00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 32 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3531 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.010 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.010 The recommended git tool is: git 00:00:00.011 using credential 00000000-0000-0000-0000-000000000002 00:00:00.012 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.026 Fetching changes from the remote Git repository 00:00:00.028 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.038 Using shallow fetch with depth 1 00:00:00.038 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.038 > git --version # timeout=10 00:00:00.050 > git --version # 'git version 2.39.2' 00:00:00.050 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.064 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.064 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.074 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.085 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.097 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:02.097 > git config core.sparsecheckout # timeout=10 00:00:02.107 > git read-tree -mu HEAD # timeout=10 00:00:02.121 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 
00:00:02.139 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:02.139 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:02.223 [Pipeline] Start of Pipeline 00:00:02.235 [Pipeline] library 00:00:02.237 Loading library shm_lib@master 00:00:02.237 Library shm_lib@master is cached. Copying from home. 00:00:02.262 [Pipeline] node 00:00:02.273 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:02.275 [Pipeline] { 00:00:02.287 [Pipeline] catchError 00:00:02.288 [Pipeline] { 00:00:02.302 [Pipeline] wrap 00:00:02.313 [Pipeline] { 00:00:02.321 [Pipeline] stage 00:00:02.323 [Pipeline] { (Prologue) 00:00:02.579 [Pipeline] sh 00:00:02.875 + logger -p user.info -t JENKINS-CI 00:00:02.896 [Pipeline] echo 00:00:02.898 Node: CYP9 00:00:02.907 [Pipeline] sh 00:00:03.265 [Pipeline] setCustomBuildProperty 00:00:03.279 [Pipeline] echo 00:00:03.280 Cleanup processes 00:00:03.287 [Pipeline] sh 00:00:03.586 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.586 3144655 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.603 [Pipeline] sh 00:00:03.896 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.896 ++ grep -v 'sudo pgrep' 00:00:03.896 ++ awk '{print $1}' 00:00:03.896 + sudo kill -9 00:00:03.896 + true 00:00:03.914 [Pipeline] cleanWs 00:00:03.926 [WS-CLEANUP] Deleting project workspace... 00:00:03.926 [WS-CLEANUP] Deferred wipeout is used... 
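The "Cleanup processes" step above pipes `pgrep -af` through `grep -v 'sudo pgrep'` and `awk '{print $1}'` before `kill -9`, with a trailing `+ true` so an empty match does not fail the stage. A minimal standalone sketch of that idiom, using a simulated `pgrep` listing instead of live processes (paths and PIDs here are illustrative, not from the build host):

```shell
#!/usr/bin/env bash
# Simulated `pgrep -af <pattern>` output: one "<pid> <command>" per line.
# In the real pipeline this comes from:
#   sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
listing='101 /var/jenkins/workspace/demo/spdk/app
102 sudo pgrep -af /var/jenkins/workspace/demo/spdk'

# Drop the pgrep invocation itself (it matches its own pattern),
# then keep only the PID column.
pids=$(printf '%s\n' "$listing" | grep -v 'sudo pgrep' | awk '{print $1}')
echo "$pids"   # -> 101

# The log then runs `sudo kill -9 $pids`; the `+ true` line afterwards is
# the `|| true` guard that keeps the stage green when nothing matched.
```

The `grep -v 'sudo pgrep'` filter matters because `pgrep -af` reports full command lines, so the searching process always matches its own pattern.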
00:00:03.934 [WS-CLEANUP] done 00:00:03.939 [Pipeline] setCustomBuildProperty 00:00:03.956 [Pipeline] sh 00:00:04.247 + sudo git config --global --replace-all safe.directory '*' 00:00:04.372 [Pipeline] httpRequest 00:00:04.789 [Pipeline] echo 00:00:04.791 Sorcerer 10.211.164.101 is alive 00:00:04.801 [Pipeline] retry 00:00:04.803 [Pipeline] { 00:00:04.818 [Pipeline] httpRequest 00:00:04.823 HttpMethod: GET 00:00:04.823 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:04.824 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:04.832 Response Code: HTTP/1.1 200 OK 00:00:04.832 Success: Status code 200 is in the accepted range: 200,404 00:00:04.833 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:05.989 [Pipeline] } 00:00:06.006 [Pipeline] // retry 00:00:06.013 [Pipeline] sh 00:00:06.302 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:06.319 [Pipeline] httpRequest 00:00:07.286 [Pipeline] echo 00:00:07.288 Sorcerer 10.211.164.101 is alive 00:00:07.298 [Pipeline] retry 00:00:07.300 [Pipeline] { 00:00:07.314 [Pipeline] httpRequest 00:00:07.319 HttpMethod: GET 00:00:07.319 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:07.320 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:07.335 Response Code: HTTP/1.1 200 OK 00:00:07.335 Success: Status code 200 is in the accepted range: 200,404 00:00:07.335 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:31.153 [Pipeline] } 00:00:31.170 [Pipeline] // retry 00:00:31.179 [Pipeline] sh 00:00:31.470 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:34.789 [Pipeline] sh 00:00:35.082 + git -C spdk log 
--oneline -n5 00:00:35.082 b18e1bd62 version: v24.09.1-pre 00:00:35.082 19524ad45 version: v24.09 00:00:35.083 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:35.083 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:35.083 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:35.105 [Pipeline] withCredentials 00:00:35.117 > git --version # timeout=10 00:00:35.131 > git --version # 'git version 2.39.2' 00:00:35.152 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:35.155 [Pipeline] { 00:00:35.164 [Pipeline] retry 00:00:35.167 [Pipeline] { 00:00:35.183 [Pipeline] sh 00:00:35.473 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:35.474 fatal: unable to access 'http://dpdk.org/git/dpdk-stable/': The requested URL returned error: 502 00:00:35.490 [Pipeline] } 00:00:35.495 ERROR: script returned exit code 128 00:00:35.495 Retrying 00:00:35.514 [Pipeline] { 00:00:35.522 [Pipeline] sh 00:00:35.811 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:45.836 [Pipeline] } 00:00:45.855 [Pipeline] // retry 00:00:45.860 [Pipeline] } 00:00:45.877 [Pipeline] // withCredentials 00:00:45.886 [Pipeline] httpRequest 00:00:46.291 [Pipeline] echo 00:00:46.293 Sorcerer 10.211.164.101 is alive 00:00:46.303 [Pipeline] retry 00:00:46.305 [Pipeline] { 00:00:46.319 [Pipeline] httpRequest 00:00:46.324 HttpMethod: GET 00:00:46.325 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:46.325 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:46.329 Response Code: HTTP/1.1 200 OK 00:00:46.330 Success: Status code 200 is in the accepted range: 200,404 00:00:46.330 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:47.714 [Pipeline] } 00:00:47.730 [Pipeline] // retry 00:00:47.736 [Pipeline] sh 00:00:48.027 + tar --no-same-owner -xf 
dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:49.956 [Pipeline] sh 00:00:50.244 + git -C dpdk log --oneline -n5 00:00:50.244 eeb0605f11 version: 23.11.0 00:00:50.244 238778122a doc: update release notes for 23.11 00:00:50.244 46aa6b3cfc doc: fix description of RSS features 00:00:50.244 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:50.244 7e421ae345 devtools: support skipping forbid rule check 00:00:50.255 [Pipeline] } 00:00:50.269 [Pipeline] // stage 00:00:50.277 [Pipeline] stage 00:00:50.280 [Pipeline] { (Prepare) 00:00:50.298 [Pipeline] writeFile 00:00:50.313 [Pipeline] sh 00:00:50.601 + logger -p user.info -t JENKINS-CI 00:00:50.615 [Pipeline] sh 00:00:50.903 + logger -p user.info -t JENKINS-CI 00:00:50.915 [Pipeline] sh 00:00:51.202 + cat autorun-spdk.conf 00:00:51.202 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.202 SPDK_TEST_NVMF=1 00:00:51.202 SPDK_TEST_NVME_CLI=1 00:00:51.202 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.202 SPDK_TEST_NVMF_NICS=e810 00:00:51.202 SPDK_TEST_VFIOUSER=1 00:00:51.202 SPDK_RUN_UBSAN=1 00:00:51.202 NET_TYPE=phy 00:00:51.202 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:51.202 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:51.211 RUN_NIGHTLY=1 00:00:51.215 [Pipeline] readFile 00:00:51.240 [Pipeline] withEnv 00:00:51.242 [Pipeline] { 00:00:51.254 [Pipeline] sh 00:00:51.543 + set -ex 00:00:51.543 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:51.543 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:51.543 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.543 ++ SPDK_TEST_NVMF=1 00:00:51.543 ++ SPDK_TEST_NVME_CLI=1 00:00:51.543 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.543 ++ SPDK_TEST_NVMF_NICS=e810 00:00:51.543 ++ SPDK_TEST_VFIOUSER=1 00:00:51.543 ++ SPDK_RUN_UBSAN=1 00:00:51.543 ++ NET_TYPE=phy 00:00:51.543 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:51.543 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:51.543 
++ RUN_NIGHTLY=1 00:00:51.543 + case $SPDK_TEST_NVMF_NICS in 00:00:51.543 + DRIVERS=ice 00:00:51.543 + [[ tcp == \r\d\m\a ]] 00:00:51.543 + [[ -n ice ]] 00:00:51.543 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:51.543 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:51.543 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:51.543 rmmod: ERROR: Module irdma is not currently loaded 00:00:51.543 rmmod: ERROR: Module i40iw is not currently loaded 00:00:51.543 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:51.543 + true 00:00:51.543 + for D in $DRIVERS 00:00:51.543 + sudo modprobe ice 00:00:51.543 + exit 0 00:00:51.554 [Pipeline] } 00:00:51.568 [Pipeline] // withEnv 00:00:51.573 [Pipeline] } 00:00:51.587 [Pipeline] // stage 00:00:51.597 [Pipeline] catchError 00:00:51.599 [Pipeline] { 00:00:51.614 [Pipeline] timeout 00:00:51.614 Timeout set to expire in 1 hr 0 min 00:00:51.616 [Pipeline] { 00:00:51.629 [Pipeline] stage 00:00:51.631 [Pipeline] { (Tests) 00:00:51.645 [Pipeline] sh 00:00:51.934 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.934 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.934 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.934 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:51.934 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:51.934 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:51.934 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:51.934 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:51.934 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:51.934 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:51.934 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:51.934 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:51.934 + source /etc/os-release 00:00:51.934 ++ NAME='Fedora Linux' 00:00:51.934 ++ VERSION='39 (Cloud Edition)' 00:00:51.934 ++ ID=fedora 00:00:51.935 ++ VERSION_ID=39 00:00:51.935 ++ VERSION_CODENAME= 00:00:51.935 ++ PLATFORM_ID=platform:f39 00:00:51.935 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:51.935 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:51.935 ++ LOGO=fedora-logo-icon 00:00:51.935 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:51.935 ++ HOME_URL=https://fedoraproject.org/ 00:00:51.935 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:51.935 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:51.935 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:51.935 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:51.935 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:51.935 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:51.935 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:51.935 ++ SUPPORT_END=2024-11-12 00:00:51.935 ++ VARIANT='Cloud Edition' 00:00:51.935 ++ VARIANT_ID=cloud 00:00:51.935 + uname -a 00:00:51.935 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:51.935 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:55.309 Hugepages 00:00:55.309 node hugesize free / total 00:00:55.309 node0 1048576kB 0 / 0 00:00:55.309 node0 2048kB 0 / 0 00:00:55.309 node1 1048576kB 0 / 0 00:00:55.309 node1 2048kB 0 / 0 00:00:55.309 00:00:55.309 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:55.309 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:55.309 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:00:55.309 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:55.309 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:55.309 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:55.309 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:55.309 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:55.309 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:55.309 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:55.309 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:55.309 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:55.309 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:55.309 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:55.309 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:55.309 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:55.309 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:55.309 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:55.309 + rm -f /tmp/spdk-ld-path 00:00:55.309 + source autorun-spdk.conf 00:00:55.309 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.309 ++ SPDK_TEST_NVMF=1 00:00:55.309 ++ SPDK_TEST_NVME_CLI=1 00:00:55.309 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.309 ++ SPDK_TEST_NVMF_NICS=e810 00:00:55.309 ++ SPDK_TEST_VFIOUSER=1 00:00:55.309 ++ SPDK_RUN_UBSAN=1 00:00:55.309 ++ NET_TYPE=phy 00:00:55.309 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:55.309 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.309 ++ RUN_NIGHTLY=1 00:00:55.309 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:55.309 + [[ -n '' ]] 00:00:55.309 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:55.309 + for M in /var/spdk/build-*-manifest.txt 00:00:55.309 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:55.309 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.309 + for M in /var/spdk/build-*-manifest.txt 00:00:55.309 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:55.309 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.309 + for M in /var/spdk/build-*-manifest.txt 00:00:55.309 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:55.309 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:55.309 ++ uname 00:00:55.309 + [[ Linux == \L\i\n\u\x ]] 00:00:55.309 + sudo dmesg -T 00:00:55.309 + sudo dmesg --clear 00:00:55.309 + dmesg_pid=3145685 00:00:55.309 + [[ Fedora Linux == FreeBSD ]] 00:00:55.309 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:55.309 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:55.309 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:55.309 + sudo dmesg -Tw 00:00:55.309 + [[ -x /usr/src/fio-static/fio ]] 00:00:55.309 + export FIO_BIN=/usr/src/fio-static/fio 00:00:55.309 + FIO_BIN=/usr/src/fio-static/fio 00:00:55.309 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:55.309 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:55.309 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:55.309 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:55.309 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:55.309 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:55.309 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:55.309 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:55.309 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:55.309 Test configuration: 00:00:55.309 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.309 SPDK_TEST_NVMF=1 00:00:55.309 SPDK_TEST_NVME_CLI=1 00:00:55.309 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:55.309 SPDK_TEST_NVMF_NICS=e810 00:00:55.309 SPDK_TEST_VFIOUSER=1 00:00:55.309 SPDK_RUN_UBSAN=1 00:00:55.309 NET_TYPE=phy 00:00:55.309 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:55.309 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
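The harness above consumes `autorun-spdk.conf` by testing for the file and then `source`-ing it, which is why each `KEY=value` line reappears in the trace prefixed with `++`. A minimal sketch of that guard-then-source pattern, using a temporary file and two of the same keys seen in the log:

```shell
#!/usr/bin/env bash
set -e

# Stand-in for /var/jenkins/workspace/.../autorun-spdk.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
SPDK_TEST_NVMF=1
SPDK_TEST_NVMF_TRANSPORT=tcp
EOF

# Same pattern as the log: only load the file if it exists.
[[ -f "$conf" ]] && source "$conf"

echo "$SPDK_TEST_NVMF_TRANSPORT"   # -> tcp
rm -f "$conf"
```

Because the file is sourced rather than parsed, every line executes as shell, which is also why the config can safely contain only plain assignments.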
00:00:55.309 RUN_NIGHTLY=1 21:49:13 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:00:55.309 21:49:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:55.309 21:49:13 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:55.309 21:49:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:55.309 21:49:13 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:55.309 21:49:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:55.309 21:49:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.309 21:49:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.309 21:49:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.309 21:49:13 -- paths/export.sh@5 -- $ export PATH 00:00:55.310 21:49:13 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:55.310 21:49:13 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:55.310 21:49:13 -- common/autobuild_common.sh@479 -- $ date +%s 00:00:55.310 21:49:13 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1728762553.XXXXXX 00:00:55.310 21:49:13 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1728762553.i1pUJj 00:00:55.310 21:49:13 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:00:55.310 21:49:13 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:00:55.310 21:49:13 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:55.310 21:49:13 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:55.310 21:49:13 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:55.310 21:49:13 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:55.310 21:49:13 -- common/autobuild_common.sh@495 -- $ get_config_params 00:00:55.310 21:49:13 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:55.310 21:49:13 -- common/autotest_common.sh@10 -- $ set +x 00:00:55.310 21:49:13 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:55.310 21:49:13 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:00:55.310 21:49:13 -- pm/common@17 -- $ local monitor 00:00:55.310 21:49:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:55.310 21:49:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:55.310 21:49:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:55.310 21:49:13 -- pm/common@21 -- $ date +%s 00:00:55.310 21:49:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:55.310 21:49:13 -- pm/common@25 -- $ sleep 1 00:00:55.310 21:49:13 -- pm/common@21 -- $ date +%s 00:00:55.310 21:49:13 -- pm/common@21 -- $ date +%s 00:00:55.310 21:49:13 -- pm/common@21 -- $ date +%s 00:00:55.310 21:49:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728762553 00:00:55.310 21:49:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728762553 00:00:55.310 21:49:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728762553 00:00:55.310 21:49:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728762553 00:00:55.571 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728762553_collect-cpu-load.pm.log 00:00:55.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728762553_collect-vmstat.pm.log 00:00:55.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728762553_collect-cpu-temp.pm.log 00:00:55.571 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728762553_collect-bmc-pm.bmc.pm.log 00:00:56.513 21:49:14 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:00:56.513 21:49:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:56.513 21:49:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:56.513 21:49:14 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:56.513 21:49:14 -- spdk/autobuild.sh@16 -- $ date -u 00:00:56.513 Sat Oct 12 07:49:14 PM UTC 2024 00:00:56.513 21:49:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:56.513 v24.09-rc1-9-gb18e1bd62 00:00:56.513 21:49:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:56.513 21:49:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:56.513 21:49:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:56.513 21:49:14 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:56.513 21:49:14 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:56.513 21:49:14 -- common/autotest_common.sh@10 -- $ set +x 00:00:56.513 ************************************ 00:00:56.513 START TEST ubsan 00:00:56.513 ************************************ 00:00:56.513 21:49:14 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:56.513 using ubsan 00:00:56.513 00:00:56.513 real 0m0.001s 00:00:56.513 user 0m0.000s 00:00:56.513 sys 0m0.000s 00:00:56.513 21:49:14 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:56.513 21:49:14 ubsan -- 
common/autotest_common.sh@10 -- $ set +x 00:00:56.513 ************************************ 00:00:56.513 END TEST ubsan 00:00:56.513 ************************************ 00:00:56.513 21:49:14 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:00:56.513 21:49:14 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:00:56.513 21:49:14 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:56.513 21:49:14 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:00:56.513 21:49:14 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:56.513 21:49:14 -- common/autotest_common.sh@10 -- $ set +x 00:00:56.513 ************************************ 00:00:56.513 START TEST build_native_dpdk 00:00:56.513 ************************************ 00:00:56.513 21:49:14 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:56.513 21:49:14 build_native_dpdk -- 
common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:56.513 21:49:14 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:56.513 eeb0605f11 version: 23.11.0 00:00:56.513 238778122a doc: update release notes for 23.11 00:00:56.513 46aa6b3cfc doc: fix description of RSS features 00:00:56.513 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:56.513 7e421ae345 devtools: support skipping forbid rule check 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@90 -- $ 
dpdk_cflags+=' -Werror' 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:00:56.514 
21:49:14 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:56.514 patching file config/rte_config.h 00:00:56.514 Hunk #1 succeeded at 60 (offset 1 line). 
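The `cmp_versions` trace above (in spdk/scripts/common.sh) splits each version on `IFS=.-:` into an array and compares field by field: `lt 23.11.0 21.11.0` stops at the first field (23 > 21) and returns 1, while the subsequent `lt 23.11.0 24.07.0` returns 0. A simplified sketch of that component-wise compare; `version_lt` is a hypothetical name for illustration, not the helper's real interface:

```shell
#!/usr/bin/env bash
# Returns 0 if $1 < $2, 1 otherwise, comparing numeric fields left to right.
# Fields are split on '.', '-' and ':' exactly like the log's IFS=.-: lines;
# missing trailing fields are treated as 0.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1   # first differing field decides
        (( a < b )) && return 0
    done
    return 1   # equal -> not less-than
}

version_lt 23.11.0 21.11.0 && echo lt || echo not-lt   # -> not-lt
version_lt 23.11.0 24.07.0 && echo lt || echo not-lt   # -> lt
```

This first comparison is what lets the build skip legacy-DPDK patches, and the second (`lt 23.11.0 24.07.0` succeeding) is why the `rte_pcapng.c` patch is applied.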
00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:00:56.514 21:49:14 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:00:56.514 21:49:14 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:00:56.514 patching file lib/pcapng/rte_pcapng.c 00:00:56.514 21:49:15 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:00:56.776 21:49:15 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:00:56.776 21:49:15 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:00:56.776 21:49:15 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:00:56.776 21:49:15 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:00:56.776 21:49:15 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:00:56.776 21:49:15 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:56.776 21:49:15 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:02.064 The Meson build system 00:01:02.064 Version: 1.5.0 00:01:02.064 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:02.064 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:02.064 Build type: native build 00:01:02.064 Program cat found: YES (/usr/bin/cat) 00:01:02.064 Project name: DPDK 00:01:02.064 Project version: 23.11.0 00:01:02.064 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:02.064 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:02.064 Host machine cpu family: x86_64 00:01:02.064 Host machine cpu: x86_64 00:01:02.064 Message: ## Building in Developer Mode ## 00:01:02.064 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:02.064 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:02.064 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:02.064 Program python3 found: YES (/usr/bin/python3) 00:01:02.064 Program cat found: YES (/usr/bin/cat) 00:01:02.064 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:02.064 Compiler for C supports arguments -march=native: YES 00:01:02.064 Checking for size of "void *" : 8 00:01:02.064 Checking for size of "void *" : 8 (cached) 00:01:02.064 Library m found: YES 00:01:02.064 Library numa found: YES 00:01:02.064 Has header "numaif.h" : YES 00:01:02.064 Library fdt found: NO 00:01:02.064 Library execinfo found: NO 00:01:02.064 Has header "execinfo.h" : YES 00:01:02.064 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:02.064 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:02.064 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:02.064 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:02.064 Run-time dependency openssl found: YES 3.1.1 00:01:02.064 Run-time dependency libpcap found: YES 1.10.4 00:01:02.064 Has header "pcap.h" with dependency libpcap: YES 00:01:02.064 Compiler for C supports arguments -Wcast-qual: YES 00:01:02.064 Compiler for C supports arguments -Wdeprecated: YES 00:01:02.064 Compiler for C supports arguments -Wformat: YES 00:01:02.064 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:02.064 Compiler for C supports arguments -Wformat-security: NO 00:01:02.064 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:02.064 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:02.064 Compiler for C supports arguments -Wnested-externs: YES 00:01:02.064 Compiler for C supports arguments -Wold-style-definition: YES 00:01:02.064 Compiler for C supports arguments -Wpointer-arith: YES 00:01:02.064 Compiler for C supports arguments -Wsign-compare: YES 00:01:02.064 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:02.064 Compiler for C supports arguments -Wundef: YES 00:01:02.064 Compiler for C supports arguments -Wwrite-strings: YES 00:01:02.064 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:02.064 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:02.064 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:01:02.064 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:02.064 Program objdump found: YES (/usr/bin/objdump) 00:01:02.064 Compiler for C supports arguments -mavx512f: YES 00:01:02.064 Checking if "AVX512 checking" compiles: YES 00:01:02.064 Fetching value of define "__SSE4_2__" : 1 00:01:02.064 Fetching value of define "__AES__" : 1 00:01:02.064 Fetching value of define "__AVX__" : 1 00:01:02.064 Fetching value of define "__AVX2__" : 1 00:01:02.064 Fetching value of define "__AVX512BW__" : 1 00:01:02.064 Fetching value of define "__AVX512CD__" : 1 00:01:02.064 Fetching value of define "__AVX512DQ__" : 1 00:01:02.064 Fetching value of define "__AVX512F__" : 1 00:01:02.064 Fetching value of define "__AVX512VL__" : 1 00:01:02.064 Fetching value of define "__PCLMUL__" : 1 00:01:02.064 Fetching value of define "__RDRND__" : 1 00:01:02.064 Fetching value of define "__RDSEED__" : 1 00:01:02.064 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:02.064 Fetching value of define "__znver1__" : (undefined) 00:01:02.064 Fetching value of define "__znver2__" : (undefined) 00:01:02.064 Fetching value of define "__znver3__" : (undefined) 00:01:02.064 Fetching value of define "__znver4__" : (undefined) 00:01:02.064 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:02.064 Message: lib/log: Defining dependency "log" 00:01:02.064 Message: lib/kvargs: Defining dependency "kvargs" 00:01:02.064 Message: lib/telemetry: Defining dependency "telemetry" 00:01:02.064 Checking for function "getentropy" : NO 00:01:02.064 Message: lib/eal: Defining dependency "eal" 00:01:02.064 Message: lib/ring: Defining dependency "ring" 00:01:02.064 Message: lib/rcu: Defining dependency "rcu" 00:01:02.064 Message: lib/mempool: Defining dependency "mempool" 00:01:02.064 Message: lib/mbuf: Defining dependency "mbuf" 00:01:02.064 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:02.064 Fetching value of 
define "__AVX512F__" : 1 (cached) 00:01:02.064 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:02.064 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:02.064 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:02.064 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:02.064 Compiler for C supports arguments -mpclmul: YES 00:01:02.064 Compiler for C supports arguments -maes: YES 00:01:02.064 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:02.064 Compiler for C supports arguments -mavx512bw: YES 00:01:02.064 Compiler for C supports arguments -mavx512dq: YES 00:01:02.064 Compiler for C supports arguments -mavx512vl: YES 00:01:02.064 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:02.064 Compiler for C supports arguments -mavx2: YES 00:01:02.064 Compiler for C supports arguments -mavx: YES 00:01:02.064 Message: lib/net: Defining dependency "net" 00:01:02.064 Message: lib/meter: Defining dependency "meter" 00:01:02.064 Message: lib/ethdev: Defining dependency "ethdev" 00:01:02.064 Message: lib/pci: Defining dependency "pci" 00:01:02.064 Message: lib/cmdline: Defining dependency "cmdline" 00:01:02.064 Message: lib/metrics: Defining dependency "metrics" 00:01:02.064 Message: lib/hash: Defining dependency "hash" 00:01:02.064 Message: lib/timer: Defining dependency "timer" 00:01:02.064 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:02.064 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:02.064 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:02.064 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:02.064 Message: lib/acl: Defining dependency "acl" 00:01:02.064 Message: lib/bbdev: Defining dependency "bbdev" 00:01:02.065 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:02.065 Run-time dependency libelf found: YES 0.191 00:01:02.065 Message: lib/bpf: Defining dependency "bpf" 00:01:02.065 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:02.065 
Message: lib/compressdev: Defining dependency "compressdev" 00:01:02.065 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:02.065 Message: lib/distributor: Defining dependency "distributor" 00:01:02.065 Message: lib/dmadev: Defining dependency "dmadev" 00:01:02.065 Message: lib/efd: Defining dependency "efd" 00:01:02.065 Message: lib/eventdev: Defining dependency "eventdev" 00:01:02.065 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:02.065 Message: lib/gpudev: Defining dependency "gpudev" 00:01:02.065 Message: lib/gro: Defining dependency "gro" 00:01:02.065 Message: lib/gso: Defining dependency "gso" 00:01:02.065 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:02.065 Message: lib/jobstats: Defining dependency "jobstats" 00:01:02.065 Message: lib/latencystats: Defining dependency "latencystats" 00:01:02.065 Message: lib/lpm: Defining dependency "lpm" 00:01:02.065 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:02.065 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:02.065 Fetching value of define "__AVX512IFMA__" : 1 00:01:02.065 Message: lib/member: Defining dependency "member" 00:01:02.065 Message: lib/pcapng: Defining dependency "pcapng" 00:01:02.065 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:02.065 Message: lib/power: Defining dependency "power" 00:01:02.065 Message: lib/rawdev: Defining dependency "rawdev" 00:01:02.065 Message: lib/regexdev: Defining dependency "regexdev" 00:01:02.065 Message: lib/mldev: Defining dependency "mldev" 00:01:02.065 Message: lib/rib: Defining dependency "rib" 00:01:02.065 Message: lib/reorder: Defining dependency "reorder" 00:01:02.065 Message: lib/sched: Defining dependency "sched" 00:01:02.065 Message: lib/security: Defining dependency "security" 00:01:02.065 Message: lib/stack: Defining dependency "stack" 00:01:02.065 Has header "linux/userfaultfd.h" : YES 00:01:02.065 Has header "linux/vduse.h" : YES 00:01:02.065 Message: lib/vhost: Defining dependency 
"vhost" 00:01:02.065 Message: lib/ipsec: Defining dependency "ipsec" 00:01:02.065 Message: lib/pdcp: Defining dependency "pdcp" 00:01:02.065 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:02.065 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:02.065 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:02.065 Message: lib/fib: Defining dependency "fib" 00:01:02.065 Message: lib/port: Defining dependency "port" 00:01:02.065 Message: lib/pdump: Defining dependency "pdump" 00:01:02.065 Message: lib/table: Defining dependency "table" 00:01:02.065 Message: lib/pipeline: Defining dependency "pipeline" 00:01:02.065 Message: lib/graph: Defining dependency "graph" 00:01:02.065 Message: lib/node: Defining dependency "node" 00:01:02.065 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:02.065 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:02.065 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:03.452 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:03.452 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:03.452 Compiler for C supports arguments -Wno-unused-value: YES 00:01:03.452 Compiler for C supports arguments -Wno-format: YES 00:01:03.452 Compiler for C supports arguments -Wno-format-security: YES 00:01:03.452 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:03.452 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:03.452 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:03.452 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:03.452 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:03.452 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:03.452 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:03.452 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:03.452 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:03.452 
Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:03.452 Has header "sys/epoll.h" : YES 00:01:03.452 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:03.452 Configuring doxy-api-html.conf using configuration 00:01:03.452 Configuring doxy-api-man.conf using configuration 00:01:03.452 Program mandb found: YES (/usr/bin/mandb) 00:01:03.452 Program sphinx-build found: NO 00:01:03.452 Configuring rte_build_config.h using configuration 00:01:03.452 Message: 00:01:03.452 ================= 00:01:03.452 Applications Enabled 00:01:03.452 ================= 00:01:03.452 00:01:03.452 apps: 00:01:03.452 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:03.452 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:03.452 test-pmd, test-regex, test-sad, test-security-perf, 00:01:03.452 00:01:03.452 Message: 00:01:03.452 ================= 00:01:03.452 Libraries Enabled 00:01:03.452 ================= 00:01:03.452 00:01:03.452 libs: 00:01:03.452 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:03.452 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:03.452 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:03.452 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:03.452 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:03.452 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:03.452 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:03.452 00:01:03.452 00:01:03.452 Message: 00:01:03.452 =============== 00:01:03.452 Drivers Enabled 00:01:03.452 =============== 00:01:03.452 00:01:03.452 common: 00:01:03.452 00:01:03.452 bus: 00:01:03.452 pci, vdev, 00:01:03.452 mempool: 00:01:03.452 ring, 00:01:03.452 dma: 00:01:03.452 00:01:03.452 net: 00:01:03.452 i40e, 00:01:03.452 raw: 00:01:03.452 00:01:03.452 crypto: 00:01:03.452 00:01:03.452 
compress: 00:01:03.452 00:01:03.452 regex: 00:01:03.452 00:01:03.452 ml: 00:01:03.452 00:01:03.452 vdpa: 00:01:03.452 00:01:03.452 event: 00:01:03.452 00:01:03.452 baseband: 00:01:03.452 00:01:03.452 gpu: 00:01:03.452 00:01:03.452 00:01:03.452 Message: 00:01:03.452 ================= 00:01:03.452 Content Skipped 00:01:03.452 ================= 00:01:03.452 00:01:03.452 apps: 00:01:03.452 00:01:03.452 libs: 00:01:03.452 00:01:03.452 drivers: 00:01:03.452 common/cpt: not in enabled drivers build config 00:01:03.452 common/dpaax: not in enabled drivers build config 00:01:03.452 common/iavf: not in enabled drivers build config 00:01:03.452 common/idpf: not in enabled drivers build config 00:01:03.452 common/mvep: not in enabled drivers build config 00:01:03.453 common/octeontx: not in enabled drivers build config 00:01:03.453 bus/auxiliary: not in enabled drivers build config 00:01:03.453 bus/cdx: not in enabled drivers build config 00:01:03.453 bus/dpaa: not in enabled drivers build config 00:01:03.453 bus/fslmc: not in enabled drivers build config 00:01:03.453 bus/ifpga: not in enabled drivers build config 00:01:03.453 bus/platform: not in enabled drivers build config 00:01:03.453 bus/vmbus: not in enabled drivers build config 00:01:03.453 common/cnxk: not in enabled drivers build config 00:01:03.453 common/mlx5: not in enabled drivers build config 00:01:03.453 common/nfp: not in enabled drivers build config 00:01:03.453 common/qat: not in enabled drivers build config 00:01:03.453 common/sfc_efx: not in enabled drivers build config 00:01:03.453 mempool/bucket: not in enabled drivers build config 00:01:03.453 mempool/cnxk: not in enabled drivers build config 00:01:03.453 mempool/dpaa: not in enabled drivers build config 00:01:03.453 mempool/dpaa2: not in enabled drivers build config 00:01:03.453 mempool/octeontx: not in enabled drivers build config 00:01:03.453 mempool/stack: not in enabled drivers build config 00:01:03.453 dma/cnxk: not in enabled drivers build config 
00:01:03.453 dma/dpaa: not in enabled drivers build config 00:01:03.453 dma/dpaa2: not in enabled drivers build config 00:01:03.453 dma/hisilicon: not in enabled drivers build config 00:01:03.453 dma/idxd: not in enabled drivers build config 00:01:03.453 dma/ioat: not in enabled drivers build config 00:01:03.453 dma/skeleton: not in enabled drivers build config 00:01:03.453 net/af_packet: not in enabled drivers build config 00:01:03.453 net/af_xdp: not in enabled drivers build config 00:01:03.453 net/ark: not in enabled drivers build config 00:01:03.453 net/atlantic: not in enabled drivers build config 00:01:03.453 net/avp: not in enabled drivers build config 00:01:03.453 net/axgbe: not in enabled drivers build config 00:01:03.453 net/bnx2x: not in enabled drivers build config 00:01:03.453 net/bnxt: not in enabled drivers build config 00:01:03.453 net/bonding: not in enabled drivers build config 00:01:03.453 net/cnxk: not in enabled drivers build config 00:01:03.453 net/cpfl: not in enabled drivers build config 00:01:03.453 net/cxgbe: not in enabled drivers build config 00:01:03.453 net/dpaa: not in enabled drivers build config 00:01:03.453 net/dpaa2: not in enabled drivers build config 00:01:03.453 net/e1000: not in enabled drivers build config 00:01:03.453 net/ena: not in enabled drivers build config 00:01:03.453 net/enetc: not in enabled drivers build config 00:01:03.453 net/enetfec: not in enabled drivers build config 00:01:03.453 net/enic: not in enabled drivers build config 00:01:03.453 net/failsafe: not in enabled drivers build config 00:01:03.453 net/fm10k: not in enabled drivers build config 00:01:03.453 net/gve: not in enabled drivers build config 00:01:03.453 net/hinic: not in enabled drivers build config 00:01:03.453 net/hns3: not in enabled drivers build config 00:01:03.453 net/iavf: not in enabled drivers build config 00:01:03.453 net/ice: not in enabled drivers build config 00:01:03.453 net/idpf: not in enabled drivers build config 00:01:03.453 
net/igc: not in enabled drivers build config 00:01:03.453 net/ionic: not in enabled drivers build config 00:01:03.453 net/ipn3ke: not in enabled drivers build config 00:01:03.453 net/ixgbe: not in enabled drivers build config 00:01:03.453 net/mana: not in enabled drivers build config 00:01:03.453 net/memif: not in enabled drivers build config 00:01:03.453 net/mlx4: not in enabled drivers build config 00:01:03.453 net/mlx5: not in enabled drivers build config 00:01:03.453 net/mvneta: not in enabled drivers build config 00:01:03.453 net/mvpp2: not in enabled drivers build config 00:01:03.453 net/netvsc: not in enabled drivers build config 00:01:03.453 net/nfb: not in enabled drivers build config 00:01:03.453 net/nfp: not in enabled drivers build config 00:01:03.453 net/ngbe: not in enabled drivers build config 00:01:03.453 net/null: not in enabled drivers build config 00:01:03.453 net/octeontx: not in enabled drivers build config 00:01:03.453 net/octeon_ep: not in enabled drivers build config 00:01:03.453 net/pcap: not in enabled drivers build config 00:01:03.453 net/pfe: not in enabled drivers build config 00:01:03.453 net/qede: not in enabled drivers build config 00:01:03.453 net/ring: not in enabled drivers build config 00:01:03.453 net/sfc: not in enabled drivers build config 00:01:03.453 net/softnic: not in enabled drivers build config 00:01:03.453 net/tap: not in enabled drivers build config 00:01:03.453 net/thunderx: not in enabled drivers build config 00:01:03.453 net/txgbe: not in enabled drivers build config 00:01:03.453 net/vdev_netvsc: not in enabled drivers build config 00:01:03.453 net/vhost: not in enabled drivers build config 00:01:03.453 net/virtio: not in enabled drivers build config 00:01:03.453 net/vmxnet3: not in enabled drivers build config 00:01:03.453 raw/cnxk_bphy: not in enabled drivers build config 00:01:03.453 raw/cnxk_gpio: not in enabled drivers build config 00:01:03.453 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:03.453 
raw/ifpga: not in enabled drivers build config 00:01:03.453 raw/ntb: not in enabled drivers build config 00:01:03.453 raw/skeleton: not in enabled drivers build config 00:01:03.453 crypto/armv8: not in enabled drivers build config 00:01:03.453 crypto/bcmfs: not in enabled drivers build config 00:01:03.453 crypto/caam_jr: not in enabled drivers build config 00:01:03.453 crypto/ccp: not in enabled drivers build config 00:01:03.453 crypto/cnxk: not in enabled drivers build config 00:01:03.453 crypto/dpaa_sec: not in enabled drivers build config 00:01:03.453 crypto/dpaa2_sec: not in enabled drivers build config 00:01:03.453 crypto/ipsec_mb: not in enabled drivers build config 00:01:03.453 crypto/mlx5: not in enabled drivers build config 00:01:03.453 crypto/mvsam: not in enabled drivers build config 00:01:03.453 crypto/nitrox: not in enabled drivers build config 00:01:03.453 crypto/null: not in enabled drivers build config 00:01:03.453 crypto/octeontx: not in enabled drivers build config 00:01:03.453 crypto/openssl: not in enabled drivers build config 00:01:03.453 crypto/scheduler: not in enabled drivers build config 00:01:03.453 crypto/uadk: not in enabled drivers build config 00:01:03.453 crypto/virtio: not in enabled drivers build config 00:01:03.453 compress/isal: not in enabled drivers build config 00:01:03.453 compress/mlx5: not in enabled drivers build config 00:01:03.453 compress/octeontx: not in enabled drivers build config 00:01:03.453 compress/zlib: not in enabled drivers build config 00:01:03.453 regex/mlx5: not in enabled drivers build config 00:01:03.453 regex/cn9k: not in enabled drivers build config 00:01:03.453 ml/cnxk: not in enabled drivers build config 00:01:03.453 vdpa/ifc: not in enabled drivers build config 00:01:03.453 vdpa/mlx5: not in enabled drivers build config 00:01:03.453 vdpa/nfp: not in enabled drivers build config 00:01:03.453 vdpa/sfc: not in enabled drivers build config 00:01:03.453 event/cnxk: not in enabled drivers build config 
00:01:03.453 event/dlb2: not in enabled drivers build config 00:01:03.453 event/dpaa: not in enabled drivers build config 00:01:03.453 event/dpaa2: not in enabled drivers build config 00:01:03.453 event/dsw: not in enabled drivers build config 00:01:03.453 event/opdl: not in enabled drivers build config 00:01:03.453 event/skeleton: not in enabled drivers build config 00:01:03.453 event/sw: not in enabled drivers build config 00:01:03.453 event/octeontx: not in enabled drivers build config 00:01:03.453 baseband/acc: not in enabled drivers build config 00:01:03.453 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:03.453 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:03.453 baseband/la12xx: not in enabled drivers build config 00:01:03.453 baseband/null: not in enabled drivers build config 00:01:03.453 baseband/turbo_sw: not in enabled drivers build config 00:01:03.453 gpu/cuda: not in enabled drivers build config 00:01:03.453 00:01:03.453 00:01:03.453 Build targets in project: 215 00:01:03.453 00:01:03.453 DPDK 23.11.0 00:01:03.453 00:01:03.453 User defined options 00:01:03.453 libdir : lib 00:01:03.453 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:03.453 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:03.453 c_link_args : 00:01:03.453 enable_docs : false 00:01:03.453 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:03.453 enable_kmods : false 00:01:03.453 machine : native 00:01:03.453 tests : false 00:01:03.453 00:01:03.453 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:03.453 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:03.714 21:49:22 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:01:03.714 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:04.288 [1/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:04.288 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:04.288 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:04.288 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:04.288 [5/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:04.288 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:04.288 [7/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:04.289 [8/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:04.289 [9/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:04.289 [10/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:04.289 [11/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:04.289 [12/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:04.289 [13/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:04.289 [14/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:04.289 [15/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:04.289 [16/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:04.289 [17/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:04.289 [18/705] Linking static target lib/librte_pci.a 00:01:04.289 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:04.289 [20/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:04.289 [21/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:04.289 [22/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:04.289 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:04.289 [24/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:04.289 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:04.289 [26/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:04.289 [27/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:04.559 [28/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:04.559 [29/705] Linking static target lib/librte_log.a 00:01:04.559 [30/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:04.559 [31/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:04.559 [32/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:04.559 [33/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:04.559 [34/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:04.559 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:04.559 [36/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:04.559 [37/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:04.559 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:04.559 [39/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:04.559 [40/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:04.559 [41/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:04.559 [42/705] Linking static target lib/librte_kvargs.a 00:01:04.559 [43/705] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:04.559 [44/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:04.559 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:04.559 [46/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:04.559 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:04.559 [48/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:04.559 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:04.559 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:04.559 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:04.559 [52/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:04.822 [53/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:04.822 [54/705] Linking static target lib/librte_ring.a 00:01:04.822 [55/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:04.822 [56/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:04.822 [57/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:04.822 [58/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:04.822 [59/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:04.822 [60/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:04.822 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:04.822 [62/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:04.822 [63/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:04.822 [64/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:04.822 [65/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:04.822 [66/705] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:01:04.822 [67/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:04.822 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:04.822 [69/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:04.822 [70/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:04.822 [71/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:04.822 [72/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:04.822 [73/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:04.822 [74/705] Linking static target lib/librte_metrics.a 00:01:04.822 [75/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:04.822 [76/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:04.822 [77/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:04.822 [78/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:05.089 [79/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:05.089 [80/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:05.089 [81/705] Linking static target lib/librte_bitratestats.a 00:01:05.089 [82/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:05.089 [83/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:05.089 [84/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:05.089 [85/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:05.089 [86/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:05.089 [87/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:05.089 [88/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:05.089 [89/705] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:05.089 [90/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:05.090 [91/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:05.090 [92/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:05.090 [93/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:05.090 [94/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:05.090 [95/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:05.090 [96/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:05.090 [97/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:05.348 [98/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.348 [99/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.348 [100/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:05.348 [101/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:05.348 [102/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.348 [103/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:05.348 [104/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:05.348 [105/705] Linking static target lib/librte_timer.a 00:01:05.348 [106/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:05.348 [107/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:05.348 [108/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:05.348 [109/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:05.348 [110/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:05.348 [111/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 
00:01:05.348 [112/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:05.348 [113/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:05.609 [114/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:05.609 [115/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:05.609 [116/705] Linking static target lib/librte_bbdev.a 00:01:05.609 [117/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:05.609 [118/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:05.609 [119/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.609 [120/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.609 [121/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:05.609 [122/705] Linking static target lib/librte_cfgfile.a 00:01:05.609 [123/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:05.609 [124/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:05.609 [125/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:05.609 [126/705] Linking static target lib/librte_dmadev.a 00:01:05.609 [127/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:05.609 [128/705] Linking target lib/librte_log.so.24.0 00:01:05.609 [129/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:05.609 [130/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:05.609 [131/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:05.609 [132/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:05.609 [133/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:05.609 [134/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:05.609 [135/705] Compiling C object 
lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:05.609 [136/705] Linking static target lib/librte_gpudev.a 00:01:05.609 [137/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:05.609 [138/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:05.609 [139/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:05.609 [140/705] Linking static target lib/librte_net.a 00:01:05.609 [141/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:05.609 [142/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:05.609 [143/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:05.609 [144/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:05.609 [145/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:05.609 [146/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:05.609 [147/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:05.609 [148/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:05.609 [149/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:05.609 [150/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:05.609 [151/705] Linking static target lib/librte_telemetry.a 00:01:05.609 [152/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:05.609 [153/705] Linking static target lib/librte_latencystats.a 00:01:05.609 [154/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:01:05.609 [155/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:05.609 [156/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:05.609 [157/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:05.609 [158/705] Linking static 
target lib/librte_meter.a 00:01:05.609 [159/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:05.609 [160/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:05.609 [161/705] Linking static target lib/librte_mempool.a 00:01:05.609 [162/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:05.609 [163/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:05.873 [164/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:05.873 [165/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.873 [166/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:05.873 [167/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:05.873 [168/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:05.873 [169/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:05.873 [170/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:05.873 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:05.873 [172/705] Linking static target lib/librte_rcu.a 00:01:05.873 [173/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:05.873 [174/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:05.873 [175/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:05.873 [176/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:05.873 [177/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:05.873 [178/705] Linking target lib/librte_kvargs.so.24.0 00:01:05.873 [179/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:05.873 [180/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:05.873 [181/705] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:05.873 [182/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:05.873 [183/705] Linking static target lib/librte_cmdline.a 00:01:05.873 [184/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:05.873 [185/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:05.873 [186/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:05.873 [187/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:05.873 [188/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:05.873 [189/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:05.873 [190/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.873 [191/705] Linking static target lib/librte_mbuf.a 00:01:05.873 [192/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:05.873 [193/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:05.873 [194/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:05.873 [195/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:05.873 [196/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:05.873 [197/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:05.873 [198/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:05.873 [199/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:05.873 [200/705] Linking static target lib/librte_regexdev.a 00:01:05.873 [201/705] Linking static target lib/librte_mldev.a 00:01:05.873 [202/705] Linking static target lib/librte_ip_frag.a 00:01:05.873 [203/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:05.873 [204/705] Compiling C object 
lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:05.873 [205/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:05.873 [206/705] Linking static target lib/librte_rawdev.a 00:01:05.873 [207/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:05.873 [208/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:05.873 [209/705] Linking static target lib/librte_jobstats.a 00:01:05.873 [210/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:05.873 [211/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:05.873 [212/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:05.873 [213/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:05.873 [214/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:05.873 [215/705] Linking static target lib/librte_stack.a 00:01:05.873 [216/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:05.873 [217/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:06.133 [218/705] Linking static target lib/librte_gso.a 00:01:06.134 [219/705] Linking static target lib/librte_reorder.a 00:01:06.134 [220/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:06.134 [221/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:06.134 [222/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:06.134 [223/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.134 [224/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:06.134 [225/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:06.134 [226/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.134 [227/705] Linking static target lib/librte_pcapng.a 00:01:06.134 [228/705] Generating 
symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:06.134 [229/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:06.134 [230/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:06.134 [231/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:06.134 [232/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:06.134 [233/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.134 [234/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:06.134 [235/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:06.134 [236/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:06.134 [237/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:06.134 [238/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:06.134 [239/705] Linking static target lib/librte_security.a 00:01:06.134 [240/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:06.134 [241/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:06.134 [242/705] Linking static target lib/librte_compressdev.a 00:01:06.134 [243/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.134 [244/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.134 [245/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:06.134 [246/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:06.134 [247/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:06.134 [248/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.134 [249/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:06.134 [250/705] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:06.134 [251/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:06.134 [252/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:06.134 [253/705] Linking static target lib/librte_power.a 00:01:06.134 [254/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:06.134 [255/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:06.134 [256/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:06.134 [257/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:06.134 [258/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:06.134 [259/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:06.134 [260/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:06.134 [261/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:06.395 [262/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:06.395 [263/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:06.395 [264/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:06.395 [265/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:06.395 [266/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.395 [267/705] Linking static target lib/librte_eal.a 00:01:06.395 [268/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:06.395 [269/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:06.395 [270/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:06.395 [271/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:06.395 [272/705] Linking static target lib/librte_dispatcher.a 00:01:06.395 [273/705] Compiling C object 
lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:06.395 [274/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:06.395 [275/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:06.395 [276/705] Linking static target lib/librte_rib.a 00:01:06.395 [277/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:06.395 [278/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:06.395 [279/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:06.395 [280/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.395 [281/705] Linking static target lib/librte_lpm.a 00:01:06.395 [282/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:06.395 [283/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:06.395 [284/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:06.395 [285/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:06.395 [286/705] Linking static target lib/librte_gro.a 00:01:06.395 [287/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:06.395 [288/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:06.395 [289/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:06.395 [290/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.395 [291/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:06.395 [292/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:06.395 [293/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:06.395 [294/705] Linking static target lib/librte_distributor.a 00:01:06.395 [295/705] Linking target lib/librte_telemetry.so.24.0 00:01:06.395 [296/705] Generating lib/ip_frag.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:06.395 [297/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:06.395 [298/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:06.656 [299/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:06.656 [300/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:06.656 [301/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.656 [302/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:06.656 [303/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.656 [304/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:06.656 [305/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:06.656 [306/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:06.656 [307/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:06.656 [308/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.656 [309/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:06.656 [310/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:06.656 [311/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:06.656 [312/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:06.656 [313/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:06.656 [314/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:06.656 [315/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:06.656 [316/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:06.656 [317/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:06.656 [318/705] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:06.656 [319/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:06.656 [320/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:06.656 [321/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:06.656 [322/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:06.656 [323/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.656 [324/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:06.656 [325/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:06.656 [326/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:06.656 [327/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:06.656 [328/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.917 [329/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:06.917 [330/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:06.917 [331/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:06.917 [332/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:06.917 [333/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.917 [334/705] Linking static target lib/librte_bpf.a 00:01:06.917 [335/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:06.917 [336/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:06.917 [337/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:06.917 [338/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:06.917 [339/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:06.917 [340/705] Compiling 
C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:06.917 [341/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:06.917 [342/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:06.917 [343/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.917 [344/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:06.917 [345/705] Linking static target lib/librte_fib.a 00:01:06.917 [346/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.917 [347/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:06.917 [348/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:06.917 [349/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.917 [350/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:06.917 [351/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:06.917 [352/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:06.917 [353/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:06.917 [354/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.917 [355/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:06.917 [356/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:06.917 [357/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.917 [358/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:06.917 [359/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:06.917 [360/705] Linking static target lib/librte_graph.a 00:01:06.917 [361/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:06.917 [362/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 
00:01:06.917 [363/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:06.917 [364/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:06.917 [365/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:06.917 [366/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.179 [367/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:07.179 [368/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:07.179 [369/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:07.179 [370/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:07.179 [371/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:07.179 [372/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:07.179 [373/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.179 [374/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:07.179 [375/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:07.179 [376/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:07.179 [377/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:07.179 [378/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:07.179 [379/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:07.179 [380/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:07.179 [381/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:07.179 [382/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:07.179 [383/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:07.179 [384/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.179 
[385/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:07.179 [386/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:07.179 [387/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:07.179 [388/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:07.179 [389/705] Linking static target lib/librte_efd.a 00:01:07.179 [390/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:07.179 [391/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:07.179 [392/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.179 [393/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.179 [394/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:07.179 [395/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:07.179 [396/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:07.179 [397/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:07.179 [398/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:07.179 [399/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:07.179 [400/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:07.442 [401/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:07.442 [402/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:07.442 [403/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:07.442 [404/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:07.442 [405/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.442 [406/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:07.442 [407/705] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:07.442 [408/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:07.442 [409/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.442 [410/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:07.442 [411/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:07.442 [412/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:07.442 [413/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:07.442 [414/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:07.442 [415/705] Linking static target lib/librte_pdump.a 00:01:07.442 [416/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:07.442 [417/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:07.442 [418/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:07.442 [419/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.442 [420/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:07.442 [421/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:07.442 [422/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:07.442 [423/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:07.442 [424/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:07.442 [425/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:07.442 [426/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:07.442 [427/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:07.442 [428/705] Linking 
static target drivers/librte_bus_vdev.a 00:01:07.442 [429/705] Linking static target drivers/librte_bus_pci.a 00:01:07.442 [430/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:07.442 [431/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.442 [432/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:07.442 [433/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.442 [434/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:07.442 [435/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:07.442 [436/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:07.442 [437/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:07.442 [438/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:07.442 [439/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:07.701 [440/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:07.701 [441/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:07.701 [442/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:07.701 [443/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:07.701 [444/705] Linking static target lib/librte_table.a 00:01:07.701 [445/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:07.701 [446/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:07.701 [447/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:07.701 [448/705] Linking static target lib/librte_ipsec.a 00:01:07.701 [449/705] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:07.701 [450/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:07.701 [451/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:07.701 [452/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:07.701 [453/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:07.701 [454/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:07.701 [455/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:07.701 [456/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:07.701 [457/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:07.701 [458/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:07.701 [459/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:07.701 [460/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:07.701 [461/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:07.701 [462/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:07.701 [463/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:07.701 [464/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.701 [465/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:07.701 [466/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:07.701 [467/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:07.701 [468/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.701 [469/705] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:07.701 [470/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:07.701 [471/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:07.701 [472/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:07.701 [473/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:07.701 [474/705] Linking static target lib/librte_pdcp.a 00:01:07.701 [475/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:07.701 [476/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:07.701 [477/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:07.701 [478/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:07.701 [479/705] Linking static target lib/librte_sched.a 00:01:07.701 [480/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:07.701 [481/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:07.701 [482/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:07.701 [483/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.701 [484/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:07.701 [485/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:07.701 [486/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:07.701 [487/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.701 [488/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:07.701 [489/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:07.960 [490/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:07.960 [491/705] 
Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:07.960 [492/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:07.960 [493/705] Linking static target drivers/librte_mempool_ring.a 00:01:07.960 [494/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:07.960 [495/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:07.960 [496/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:07.960 [497/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:07.960 [498/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:07.960 [499/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:07.960 [500/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:07.960 [501/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:07.960 [502/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.960 [503/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:07.960 [504/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:07.960 [505/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:07.960 [506/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:07.960 [507/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:07.960 [508/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:07.960 [509/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:07.960 [510/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:07.960 [511/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:07.960 [512/705] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:07.960 [513/705] Linking static target lib/librte_cryptodev.a 00:01:07.960 [514/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:07.960 [515/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:07.960 [516/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:07.960 [517/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:07.960 [518/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:07.960 [519/705] Linking static target lib/librte_node.a 00:01:07.960 [520/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:07.960 [521/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:07.960 [522/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:07.960 [523/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:07.960 [524/705] Linking static target lib/librte_member.a 00:01:07.960 [525/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.960 [526/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:08.221 [527/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:08.221 [528/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:08.221 [529/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:08.221 [530/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:08.221 [531/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:08.221 [532/705] Linking static target lib/librte_port.a 00:01:08.221 [533/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:08.221 
[534/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:08.221 [535/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.221 [536/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:08.221 [537/705] Linking static target lib/librte_eventdev.a 00:01:08.221 [538/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:08.221 [539/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:08.221 [540/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:08.221 [541/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:08.221 [542/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:08.221 [543/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:08.221 [544/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:08.221 [545/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:08.221 [546/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.221 [547/705] Linking static target lib/acl/libavx2_tmp.a 00:01:08.221 [548/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:08.482 [549/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:08.482 [550/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:08.482 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:08.482 [552/705] Linking static target lib/librte_hash.a 00:01:08.482 [553/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:08.482 [554/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:08.482 [555/705] Linking static target 
drivers/net/i40e/libi40e_avx512_lib.a 00:01:08.482 [556/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:08.482 [557/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.482 [558/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.482 [559/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:08.482 [560/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:08.482 [561/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.482 [562/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:08.482 [563/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:08.482 [564/705] Linking static target lib/librte_acl.a 00:01:08.482 [565/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:08.743 [566/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:09.003 [567/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:09.003 [568/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.003 [569/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.003 [570/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:09.003 [571/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:09.003 [572/705] Linking static target lib/librte_ethdev.a 00:01:09.003 [573/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:09.263 [574/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.263 [575/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:09.523 [576/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 
00:01:09.523 [577/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:10.093 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:10.093 [579/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:10.093 [580/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.093 [581/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:10.093 [582/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:10.353 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:10.353 [584/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:10.353 [585/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:10.353 [586/705] Linking static target drivers/librte_net_i40e.a 00:01:11.295 [587/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:11.295 [588/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.866 [589/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:11.866 [590/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.074 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:16.074 [592/705] Linking static target lib/librte_pipeline.a 00:01:17.016 [593/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:17.016 [594/705] Linking static target lib/librte_vhost.a 00:01:17.277 [595/705] Linking target app/dpdk-test-compress-perf 00:01:17.277 [596/705] Linking target app/dpdk-dumpcap 00:01:17.277 [597/705] Linking target app/dpdk-test-fib 00:01:17.277 [598/705] Linking target app/dpdk-pdump 00:01:17.277 [599/705] Linking target app/dpdk-test-regex 00:01:17.277 [600/705] Linking target 
app/dpdk-test-bbdev 00:01:17.277 [601/705] Linking target app/dpdk-test-dma-perf 00:01:17.277 [602/705] Linking target app/dpdk-test-crypto-perf 00:01:17.277 [603/705] Linking target app/dpdk-test-sad 00:01:17.277 [604/705] Linking target app/dpdk-testpmd 00:01:17.277 [605/705] Linking target app/dpdk-test-cmdline 00:01:17.277 [606/705] Linking target app/dpdk-proc-info 00:01:17.277 [607/705] Linking target app/dpdk-graph 00:01:17.277 [608/705] Linking target app/dpdk-test-acl 00:01:17.277 [609/705] Linking target app/dpdk-test-gpudev 00:01:17.277 [610/705] Linking target app/dpdk-test-flow-perf 00:01:17.277 [611/705] Linking target app/dpdk-test-pipeline 00:01:17.277 [612/705] Linking target app/dpdk-test-security-perf 00:01:17.277 [613/705] Linking target app/dpdk-test-mldev 00:01:17.277 [614/705] Linking target app/dpdk-test-eventdev 00:01:17.538 [615/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.538 [616/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.799 [617/705] Linking target lib/librte_eal.so.24.0 00:01:17.799 [618/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:18.061 [619/705] Linking target lib/librte_ring.so.24.0 00:01:18.061 [620/705] Linking target lib/librte_pci.so.24.0 00:01:18.061 [621/705] Linking target lib/librte_meter.so.24.0 00:01:18.061 [622/705] Linking target lib/librte_timer.so.24.0 00:01:18.061 [623/705] Linking target lib/librte_dmadev.so.24.0 00:01:18.061 [624/705] Linking target lib/librte_cfgfile.so.24.0 00:01:18.061 [625/705] Linking target lib/librte_jobstats.so.24.0 00:01:18.061 [626/705] Linking target lib/librte_stack.so.24.0 00:01:18.061 [627/705] Linking target drivers/librte_bus_vdev.so.24.0 00:01:18.061 [628/705] Linking target lib/librte_acl.so.24.0 00:01:18.061 [629/705] Linking target lib/librte_rawdev.so.24.0 00:01:18.061 [630/705] Generating symbol file 
lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:18.061 [631/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:18.061 [632/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:18.061 [633/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:18.061 [634/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:18.061 [635/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:18.061 [636/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:18.061 [637/705] Linking target drivers/librte_bus_pci.so.24.0 00:01:18.061 [638/705] Linking target lib/librte_rcu.so.24.0 00:01:18.061 [639/705] Linking target lib/librte_mempool.so.24.0 00:01:18.322 [640/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:18.322 [641/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:18.322 [642/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:18.322 [643/705] Linking target lib/librte_rib.so.24.0 00:01:18.322 [644/705] Linking target drivers/librte_mempool_ring.so.24.0 00:01:18.322 [645/705] Linking target lib/librte_mbuf.so.24.0 00:01:18.584 [646/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:18.584 [647/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:18.584 [648/705] Linking target lib/librte_bbdev.so.24.0 00:01:18.584 [649/705] Linking target lib/librte_gpudev.so.24.0 00:01:18.584 [650/705] Linking target lib/librte_net.so.24.0 00:01:18.584 [651/705] Linking target lib/librte_distributor.so.24.0 00:01:18.584 [652/705] Linking target lib/librte_compressdev.so.24.0 00:01:18.584 [653/705] Linking target lib/librte_mldev.so.24.0 00:01:18.584 [654/705] Linking 
target lib/librte_cryptodev.so.24.0 00:01:18.584 [655/705] Linking target lib/librte_regexdev.so.24.0 00:01:18.584 [656/705] Linking target lib/librte_reorder.so.24.0 00:01:18.584 [657/705] Linking target lib/librte_fib.so.24.0 00:01:18.584 [658/705] Linking target lib/librte_sched.so.24.0 00:01:18.584 [659/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:18.584 [660/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:18.584 [661/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:18.584 [662/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:18.845 [663/705] Linking target lib/librte_cmdline.so.24.0 00:01:18.845 [664/705] Linking target lib/librte_hash.so.24.0 00:01:18.845 [665/705] Linking target lib/librte_security.so.24.0 00:01:18.845 [666/705] Linking target lib/librte_ethdev.so.24.0 00:01:18.845 [667/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:18.845 [668/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:18.845 [669/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:18.845 [670/705] Linking target lib/librte_lpm.so.24.0 00:01:18.845 [671/705] Linking target lib/librte_efd.so.24.0 00:01:18.845 [672/705] Linking target lib/librte_member.so.24.0 00:01:18.845 [673/705] Linking target lib/librte_ipsec.so.24.0 00:01:18.845 [674/705] Linking target lib/librte_pdcp.so.24.0 00:01:18.845 [675/705] Linking target lib/librte_metrics.so.24.0 00:01:18.845 [676/705] Linking target lib/librte_ip_frag.so.24.0 00:01:18.845 [677/705] Linking target lib/librte_bpf.so.24.0 00:01:18.845 [678/705] Linking target lib/librte_pcapng.so.24.0 00:01:18.845 [679/705] Linking target lib/librte_gso.so.24.0 00:01:18.845 [680/705] Linking target lib/librte_gro.so.24.0 00:01:18.845 [681/705] Linking 
target lib/librte_power.so.24.0 00:01:19.106 [682/705] Linking target lib/librte_eventdev.so.24.0 00:01:19.106 [683/705] Linking target drivers/librte_net_i40e.so.24.0 00:01:19.106 [684/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.106 [685/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:19.106 [686/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:19.106 [687/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:19.106 [688/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:19.106 [689/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:19.106 [690/705] Linking target lib/librte_vhost.so.24.0 00:01:19.106 [691/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:19.106 [692/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:19.106 [693/705] Linking target lib/librte_latencystats.so.24.0 00:01:19.106 [694/705] Linking target lib/librte_bitratestats.so.24.0 00:01:19.106 [695/705] Linking target lib/librte_pdump.so.24.0 00:01:19.106 [696/705] Linking target lib/librte_graph.so.24.0 00:01:19.106 [697/705] Linking target lib/librte_dispatcher.so.24.0 00:01:19.106 [698/705] Linking target lib/librte_port.so.24.0 00:01:19.367 [699/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:19.367 [700/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:19.367 [701/705] Linking target lib/librte_node.so.24.0 00:01:19.367 [702/705] Linking target lib/librte_table.so.24.0 00:01:19.628 [703/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:21.543 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:21.543 [705/705] Linking target lib/librte_pipeline.so.24.0 00:01:21.543 21:49:39 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:01:21.543 21:49:39 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:21.543 21:49:39 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:01:21.543 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:21.543 [0/1] Installing files. 00:01:21.808 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.808 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:21.809 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:21.809 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.809 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:21.810 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:21.810 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:21.810 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:21.810 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:21.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:21.811 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:21.811 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.812 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:21.812 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:21.812 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:21.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 
00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:21.813 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:21.813 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:21.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:21.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:21.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:21.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:21.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:21.814 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:01:21.814 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing 
lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_bpf.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing 
lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing 
lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:21.814 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing 
lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.080 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing lib/librte_node.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:22.081 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:22.081 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:22.081 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.081 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:22.081 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-crypto-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.081 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.082 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.083 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:22.084 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:22.084 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:22.084 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:22.084 Installing symlink pointing 
to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:22.084 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:22.085 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:22.085 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:22.085 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:22.085 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:22.085 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:22.085 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:22.085 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:22.085 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:22.085 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:22.085 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:22.085 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:22.085 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:22.085 
Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:22.085 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:22.085 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:22.085 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:22.085 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:22.085 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:22.085 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:22.085 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:22.085 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:22.085 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:22.085 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:22.085 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:22.085 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:22.085 Installing symlink pointing to librte_hash.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:22.085 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:22.085 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:22.085 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:22.085 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:22.085 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:22.085 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:22.085 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:22.085 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:22.085 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:22.085 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:22.085 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:22.085 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:22.085 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 
00:01:22.085 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:22.085 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:22.085 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:22.085 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:22.085 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:22.085 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:22.085 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:22.085 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:22.085 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:22.085 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:22.085 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:22.085 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:22.085 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:22.085 Installing symlink pointing to 
librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:22.085 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:22.085 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:22.085 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:22.085 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:22.085 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:22.085 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:22.085 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:22.085 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:22.085 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:22.085 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:22.085 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:22.085 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:22.085 Installing symlink pointing to librte_lpm.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:22.085 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:22.085 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:22.085 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:22.085 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:22.085 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:22.085 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:22.085 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:22.085 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:22.085 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:22.085 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:22.085 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:22.085 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:22.085 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:22.085 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:22.085 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:22.085 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:22.085 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:22.085 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:22.085 Installing symlink pointing to librte_rawdev.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:22.085 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:22.085 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:22.085 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:22.086 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:22.086 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:22.086 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:22.086 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:22.086 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:22.086 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:22.086 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:22.086 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:22.086 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:22.086 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:22.086 
Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:22.086 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:22.086 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:22.086 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:22.086 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:22.086 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:22.086 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:22.086 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:22.086 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:22.086 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:22.086 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:22.086 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:22.086 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:22.086 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 
00:01:22.086 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:22.086 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:22.086 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:22.086 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:22.086 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:22.086 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:22.086 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:22.086 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:22.086 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:22.086 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:22.086 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:22.086 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:22.086 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
00:01:22.086 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:22.086 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:22.086 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:22.086 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:22.086 21:49:40 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:01:22.086 21:49:40 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.086 00:01:22.086 real 0m25.549s 00:01:22.086 user 7m25.484s 00:01:22.086 sys 3m49.327s 00:01:22.086 21:49:40 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:22.086 21:49:40 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:22.086 ************************************ 00:01:22.086 END TEST build_native_dpdk 00:01:22.086 ************************************ 00:01:22.086 21:49:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:22.086 21:49:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:22.086 21:49:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:22.086 21:49:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:22.086 21:49:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:22.086 21:49:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:22.086 21:49:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:22.086 21:49:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:22.348 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:22.348 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:22.348 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:22.609 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:22.870 Using 'verbs' RDMA provider 00:01:38.724 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:51.133 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:51.705 Creating mk/config.mk...done. 00:01:51.705 Creating mk/cc.flags.mk...done. 00:01:51.705 Type 'make' to build. 00:01:51.705 21:50:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:51.705 21:50:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:51.705 21:50:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:51.705 21:50:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.705 ************************************ 00:01:51.705 START TEST make 00:01:51.705 ************************************ 00:01:51.705 21:50:09 make -- common/autotest_common.sh@1125 -- $ make -j144 00:01:51.966 make[1]: Nothing to be done for 'all'. 
00:01:53.882 The Meson build system 00:01:53.882 Version: 1.5.0 00:01:53.882 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:53.882 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:53.882 Build type: native build 00:01:53.882 Project name: libvfio-user 00:01:53.882 Project version: 0.0.1 00:01:53.882 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:53.882 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:53.882 Host machine cpu family: x86_64 00:01:53.882 Host machine cpu: x86_64 00:01:53.882 Run-time dependency threads found: YES 00:01:53.882 Library dl found: YES 00:01:53.882 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:53.882 Run-time dependency json-c found: YES 0.17 00:01:53.882 Run-time dependency cmocka found: YES 1.1.7 00:01:53.882 Program pytest-3 found: NO 00:01:53.882 Program flake8 found: NO 00:01:53.882 Program misspell-fixer found: NO 00:01:53.882 Program restructuredtext-lint found: NO 00:01:53.882 Program valgrind found: YES (/usr/bin/valgrind) 00:01:53.882 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:53.882 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:53.882 Compiler for C supports arguments -Wwrite-strings: YES 00:01:53.882 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:53.882 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:53.882 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:53.882 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:53.882 Build targets in project: 8 00:01:53.882 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:53.882 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:53.882 00:01:53.882 libvfio-user 0.0.1 00:01:53.882 00:01:53.882 User defined options 00:01:53.882 buildtype : debug 00:01:53.882 default_library: shared 00:01:53.882 libdir : /usr/local/lib 00:01:53.882 00:01:53.882 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:53.882 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:54.143 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:54.143 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:54.143 [3/37] Compiling C object samples/null.p/null.c.o 00:01:54.143 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:54.143 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:54.143 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:54.143 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:54.143 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:54.143 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:54.143 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:54.143 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:54.143 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:54.143 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:54.143 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:54.143 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:54.143 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:54.143 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:54.143 [18/37] Compiling C object 
test/unit_tests.p/.._lib_migration.c.o 00:01:54.143 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:54.143 [20/37] Compiling C object samples/server.p/server.c.o 00:01:54.143 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:54.143 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:54.143 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:54.143 [24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:54.143 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:54.143 [26/37] Compiling C object samples/client.p/client.c.o 00:01:54.143 [27/37] Linking target samples/client 00:01:54.143 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:54.143 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:54.405 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:54.405 [31/37] Linking target test/unit_tests 00:01:54.405 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:54.405 [33/37] Linking target samples/server 00:01:54.405 [34/37] Linking target samples/gpio-pci-idio-16 00:01:54.405 [35/37] Linking target samples/null 00:01:54.405 [36/37] Linking target samples/lspci 00:01:54.405 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:54.405 INFO: autodetecting backend as ninja 00:01:54.405 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:54.666 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:54.927 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:54.927 ninja: no work to do. 
00:02:21.509 CC lib/log/log.o 00:02:21.509 CC lib/log/log_flags.o 00:02:21.509 CC lib/log/log_deprecated.o 00:02:21.509 CC lib/ut_mock/mock.o 00:02:21.509 CC lib/ut/ut.o 00:02:21.509 LIB libspdk_log.a 00:02:21.509 LIB libspdk_ut_mock.a 00:02:21.509 LIB libspdk_ut.a 00:02:21.509 SO libspdk_log.so.7.0 00:02:21.509 SO libspdk_ut_mock.so.6.0 00:02:21.509 SO libspdk_ut.so.2.0 00:02:21.509 SYMLINK libspdk_log.so 00:02:21.509 SYMLINK libspdk_ut_mock.so 00:02:21.509 SYMLINK libspdk_ut.so 00:02:21.509 CXX lib/trace_parser/trace.o 00:02:21.510 CC lib/util/base64.o 00:02:21.510 CC lib/util/bit_array.o 00:02:21.510 CC lib/ioat/ioat.o 00:02:21.510 CC lib/util/cpuset.o 00:02:21.510 CC lib/dma/dma.o 00:02:21.510 CC lib/util/crc16.o 00:02:21.510 CC lib/util/crc32.o 00:02:21.510 CC lib/util/crc32c.o 00:02:21.510 CC lib/util/crc32_ieee.o 00:02:21.510 CC lib/util/crc64.o 00:02:21.510 CC lib/util/dif.o 00:02:21.510 CC lib/util/fd.o 00:02:21.510 CC lib/util/fd_group.o 00:02:21.510 CC lib/util/file.o 00:02:21.510 CC lib/util/hexlify.o 00:02:21.510 CC lib/util/iov.o 00:02:21.510 CC lib/util/math.o 00:02:21.510 CC lib/util/net.o 00:02:21.510 CC lib/util/pipe.o 00:02:21.510 CC lib/util/strerror_tls.o 00:02:21.510 CC lib/util/string.o 00:02:21.510 CC lib/util/uuid.o 00:02:21.510 CC lib/util/xor.o 00:02:21.510 CC lib/util/zipf.o 00:02:21.510 CC lib/util/md5.o 00:02:21.510 CC lib/vfio_user/host/vfio_user.o 00:02:21.510 CC lib/vfio_user/host/vfio_user_pci.o 00:02:21.510 LIB libspdk_dma.a 00:02:21.510 SO libspdk_dma.so.5.0 00:02:21.510 LIB libspdk_ioat.a 00:02:21.510 SYMLINK libspdk_dma.so 00:02:21.510 SO libspdk_ioat.so.7.0 00:02:21.510 SYMLINK libspdk_ioat.so 00:02:21.510 LIB libspdk_vfio_user.a 00:02:21.510 SO libspdk_vfio_user.so.5.0 00:02:21.510 LIB libspdk_util.a 00:02:21.510 SYMLINK libspdk_vfio_user.so 00:02:21.510 SO libspdk_util.so.10.0 00:02:21.510 SYMLINK libspdk_util.so 00:02:21.510 LIB libspdk_trace_parser.a 00:02:21.510 SO libspdk_trace_parser.so.6.0 00:02:21.510 SYMLINK 
libspdk_trace_parser.so 00:02:21.510 CC lib/json/json_parse.o 00:02:21.510 CC lib/conf/conf.o 00:02:21.510 CC lib/json/json_util.o 00:02:21.510 CC lib/json/json_write.o 00:02:21.510 CC lib/rdma_provider/common.o 00:02:21.510 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:21.510 CC lib/env_dpdk/env.o 00:02:21.510 CC lib/vmd/vmd.o 00:02:21.510 CC lib/env_dpdk/memory.o 00:02:21.510 CC lib/rdma_utils/rdma_utils.o 00:02:21.510 CC lib/vmd/led.o 00:02:21.510 CC lib/env_dpdk/pci.o 00:02:21.510 CC lib/idxd/idxd.o 00:02:21.510 CC lib/env_dpdk/init.o 00:02:21.510 CC lib/idxd/idxd_user.o 00:02:21.510 CC lib/env_dpdk/threads.o 00:02:21.510 CC lib/idxd/idxd_kernel.o 00:02:21.510 CC lib/env_dpdk/pci_ioat.o 00:02:21.510 CC lib/env_dpdk/pci_virtio.o 00:02:21.510 CC lib/env_dpdk/pci_vmd.o 00:02:21.510 CC lib/env_dpdk/pci_idxd.o 00:02:21.510 CC lib/env_dpdk/pci_event.o 00:02:21.510 CC lib/env_dpdk/sigbus_handler.o 00:02:21.510 CC lib/env_dpdk/pci_dpdk.o 00:02:21.510 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:21.510 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:21.510 LIB libspdk_rdma_provider.a 00:02:21.510 SO libspdk_rdma_provider.so.6.0 00:02:21.510 LIB libspdk_conf.a 00:02:21.510 SO libspdk_conf.so.6.0 00:02:21.510 LIB libspdk_json.a 00:02:21.510 LIB libspdk_rdma_utils.a 00:02:21.510 SYMLINK libspdk_rdma_provider.so 00:02:21.510 SO libspdk_rdma_utils.so.1.0 00:02:21.510 SO libspdk_json.so.6.0 00:02:21.510 SYMLINK libspdk_conf.so 00:02:21.510 SYMLINK libspdk_rdma_utils.so 00:02:21.510 SYMLINK libspdk_json.so 00:02:21.510 LIB libspdk_idxd.a 00:02:21.510 SO libspdk_idxd.so.12.1 00:02:21.510 LIB libspdk_vmd.a 00:02:21.510 SO libspdk_vmd.so.6.0 00:02:21.510 SYMLINK libspdk_idxd.so 00:02:21.510 SYMLINK libspdk_vmd.so 00:02:21.510 CC lib/jsonrpc/jsonrpc_server.o 00:02:21.510 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:21.510 CC lib/jsonrpc/jsonrpc_client.o 00:02:21.510 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:21.510 LIB libspdk_jsonrpc.a 00:02:21.510 SO libspdk_jsonrpc.so.6.0 00:02:21.510 SYMLINK 
libspdk_jsonrpc.so 00:02:21.510 LIB libspdk_env_dpdk.a 00:02:21.510 SO libspdk_env_dpdk.so.15.0 00:02:21.510 SYMLINK libspdk_env_dpdk.so 00:02:21.510 CC lib/rpc/rpc.o 00:02:21.510 LIB libspdk_rpc.a 00:02:21.510 SO libspdk_rpc.so.6.0 00:02:21.510 SYMLINK libspdk_rpc.so 00:02:21.510 CC lib/trace/trace.o 00:02:21.510 CC lib/notify/notify.o 00:02:21.510 CC lib/trace/trace_flags.o 00:02:21.510 CC lib/notify/notify_rpc.o 00:02:21.510 CC lib/keyring/keyring.o 00:02:21.510 CC lib/trace/trace_rpc.o 00:02:21.510 CC lib/keyring/keyring_rpc.o 00:02:21.510 LIB libspdk_notify.a 00:02:21.772 SO libspdk_notify.so.6.0 00:02:21.772 LIB libspdk_trace.a 00:02:21.772 LIB libspdk_keyring.a 00:02:21.772 SO libspdk_keyring.so.2.0 00:02:21.772 SO libspdk_trace.so.11.0 00:02:21.772 SYMLINK libspdk_notify.so 00:02:21.772 SYMLINK libspdk_keyring.so 00:02:21.772 SYMLINK libspdk_trace.so 00:02:22.034 CC lib/thread/thread.o 00:02:22.034 CC lib/thread/iobuf.o 00:02:22.034 CC lib/sock/sock.o 00:02:22.034 CC lib/sock/sock_rpc.o 00:02:22.607 LIB libspdk_sock.a 00:02:22.607 SO libspdk_sock.so.10.0 00:02:22.607 SYMLINK libspdk_sock.so 00:02:22.867 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:23.127 CC lib/nvme/nvme_ctrlr.o 00:02:23.127 CC lib/nvme/nvme_fabric.o 00:02:23.127 CC lib/nvme/nvme_ns_cmd.o 00:02:23.127 CC lib/nvme/nvme_ns.o 00:02:23.127 CC lib/nvme/nvme_pcie_common.o 00:02:23.127 CC lib/nvme/nvme_pcie.o 00:02:23.127 CC lib/nvme/nvme_qpair.o 00:02:23.127 CC lib/nvme/nvme.o 00:02:23.127 CC lib/nvme/nvme_quirks.o 00:02:23.127 CC lib/nvme/nvme_transport.o 00:02:23.127 CC lib/nvme/nvme_discovery.o 00:02:23.127 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:23.127 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:23.127 CC lib/nvme/nvme_tcp.o 00:02:23.127 CC lib/nvme/nvme_opal.o 00:02:23.127 CC lib/nvme/nvme_io_msg.o 00:02:23.127 CC lib/nvme/nvme_poll_group.o 00:02:23.127 CC lib/nvme/nvme_zns.o 00:02:23.127 CC lib/nvme/nvme_stubs.o 00:02:23.127 CC lib/nvme/nvme_auth.o 00:02:23.127 CC lib/nvme/nvme_cuse.o 00:02:23.127 CC 
lib/nvme/nvme_vfio_user.o 00:02:23.127 CC lib/nvme/nvme_rdma.o 00:02:23.387 LIB libspdk_thread.a 00:02:23.387 SO libspdk_thread.so.10.1 00:02:23.648 SYMLINK libspdk_thread.so 00:02:23.909 CC lib/accel/accel.o 00:02:23.909 CC lib/accel/accel_rpc.o 00:02:23.909 CC lib/accel/accel_sw.o 00:02:23.909 CC lib/init/json_config.o 00:02:23.909 CC lib/init/subsystem.o 00:02:23.909 CC lib/fsdev/fsdev.o 00:02:23.909 CC lib/init/subsystem_rpc.o 00:02:23.909 CC lib/fsdev/fsdev_io.o 00:02:23.909 CC lib/virtio/virtio.o 00:02:23.909 CC lib/init/rpc.o 00:02:23.909 CC lib/fsdev/fsdev_rpc.o 00:02:23.909 CC lib/virtio/virtio_vhost_user.o 00:02:23.909 CC lib/virtio/virtio_vfio_user.o 00:02:23.909 CC lib/virtio/virtio_pci.o 00:02:23.909 CC lib/vfu_tgt/tgt_endpoint.o 00:02:23.909 CC lib/vfu_tgt/tgt_rpc.o 00:02:23.909 CC lib/blob/blobstore.o 00:02:23.909 CC lib/blob/request.o 00:02:23.909 CC lib/blob/zeroes.o 00:02:23.909 CC lib/blob/blob_bs_dev.o 00:02:24.170 LIB libspdk_init.a 00:02:24.170 SO libspdk_init.so.6.0 00:02:24.170 LIB libspdk_vfu_tgt.a 00:02:24.170 LIB libspdk_virtio.a 00:02:24.170 SYMLINK libspdk_init.so 00:02:24.431 SO libspdk_vfu_tgt.so.3.0 00:02:24.431 SO libspdk_virtio.so.7.0 00:02:24.431 SYMLINK libspdk_vfu_tgt.so 00:02:24.431 SYMLINK libspdk_virtio.so 00:02:24.431 LIB libspdk_fsdev.a 00:02:24.691 SO libspdk_fsdev.so.1.0 00:02:24.691 CC lib/event/app.o 00:02:24.691 CC lib/event/reactor.o 00:02:24.691 CC lib/event/log_rpc.o 00:02:24.691 CC lib/event/app_rpc.o 00:02:24.691 CC lib/event/scheduler_static.o 00:02:24.691 SYMLINK libspdk_fsdev.so 00:02:24.952 LIB libspdk_accel.a 00:02:24.952 SO libspdk_accel.so.16.0 00:02:24.952 LIB libspdk_nvme.a 00:02:24.952 SYMLINK libspdk_accel.so 00:02:24.952 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:24.952 LIB libspdk_event.a 00:02:24.952 SO libspdk_nvme.so.14.0 00:02:25.213 SO libspdk_event.so.14.0 00:02:25.213 SYMLINK libspdk_event.so 00:02:25.213 SYMLINK libspdk_nvme.so 00:02:25.475 CC lib/bdev/bdev.o 00:02:25.475 CC 
lib/bdev/bdev_rpc.o 00:02:25.475 CC lib/bdev/bdev_zone.o 00:02:25.475 CC lib/bdev/part.o 00:02:25.475 CC lib/bdev/scsi_nvme.o 00:02:25.735 LIB libspdk_fuse_dispatcher.a 00:02:25.735 SO libspdk_fuse_dispatcher.so.1.0 00:02:25.735 SYMLINK libspdk_fuse_dispatcher.so 00:02:26.676 LIB libspdk_blob.a 00:02:26.676 SO libspdk_blob.so.11.0 00:02:26.676 SYMLINK libspdk_blob.so 00:02:27.247 CC lib/lvol/lvol.o 00:02:27.247 CC lib/blobfs/blobfs.o 00:02:27.247 CC lib/blobfs/tree.o 00:02:27.819 LIB libspdk_bdev.a 00:02:27.819 SO libspdk_bdev.so.16.0 00:02:27.819 SYMLINK libspdk_bdev.so 00:02:27.819 LIB libspdk_blobfs.a 00:02:27.819 SO libspdk_blobfs.so.10.0 00:02:28.080 LIB libspdk_lvol.a 00:02:28.080 SO libspdk_lvol.so.10.0 00:02:28.080 SYMLINK libspdk_blobfs.so 00:02:28.080 SYMLINK libspdk_lvol.so 00:02:28.080 CC lib/nvmf/ctrlr.o 00:02:28.080 CC lib/nvmf/ctrlr_discovery.o 00:02:28.080 CC lib/nvmf/ctrlr_bdev.o 00:02:28.080 CC lib/nvmf/subsystem.o 00:02:28.080 CC lib/nvmf/nvmf.o 00:02:28.080 CC lib/nvmf/nvmf_rpc.o 00:02:28.080 CC lib/nvmf/transport.o 00:02:28.080 CC lib/nbd/nbd.o 00:02:28.080 CC lib/nvmf/tcp.o 00:02:28.080 CC lib/nbd/nbd_rpc.o 00:02:28.080 CC lib/nvmf/stubs.o 00:02:28.080 CC lib/nvmf/mdns_server.o 00:02:28.080 CC lib/nvmf/rdma.o 00:02:28.080 CC lib/nvmf/vfio_user.o 00:02:28.080 CC lib/nvmf/auth.o 00:02:28.080 CC lib/ftl/ftl_core.o 00:02:28.080 CC lib/scsi/dev.o 00:02:28.080 CC lib/ublk/ublk.o 00:02:28.080 CC lib/ftl/ftl_init.o 00:02:28.080 CC lib/scsi/lun.o 00:02:28.080 CC lib/ublk/ublk_rpc.o 00:02:28.080 CC lib/ftl/ftl_layout.o 00:02:28.080 CC lib/scsi/port.o 00:02:28.080 CC lib/scsi/scsi.o 00:02:28.080 CC lib/ftl/ftl_debug.o 00:02:28.339 CC lib/scsi/scsi_bdev.o 00:02:28.339 CC lib/ftl/ftl_io.o 00:02:28.339 CC lib/ftl/ftl_sb.o 00:02:28.339 CC lib/scsi/scsi_pr.o 00:02:28.339 CC lib/scsi/scsi_rpc.o 00:02:28.339 CC lib/ftl/ftl_l2p.o 00:02:28.339 CC lib/scsi/task.o 00:02:28.339 CC lib/ftl/ftl_l2p_flat.o 00:02:28.339 CC lib/ftl/ftl_nv_cache.o 00:02:28.339 CC 
lib/ftl/ftl_band.o 00:02:28.339 CC lib/ftl/ftl_band_ops.o 00:02:28.339 CC lib/ftl/ftl_writer.o 00:02:28.339 CC lib/ftl/ftl_rq.o 00:02:28.339 CC lib/ftl/ftl_reloc.o 00:02:28.339 CC lib/ftl/ftl_l2p_cache.o 00:02:28.339 CC lib/ftl/ftl_p2l.o 00:02:28.339 CC lib/ftl/ftl_p2l_log.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:28.339 CC lib/ftl/utils/ftl_conf.o 00:02:28.339 CC lib/ftl/utils/ftl_md.o 00:02:28.339 CC lib/ftl/utils/ftl_mempool.o 00:02:28.339 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:28.339 CC lib/ftl/utils/ftl_bitmap.o 00:02:28.340 CC lib/ftl/utils/ftl_property.o 00:02:28.340 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:28.340 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:28.340 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:28.340 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:28.340 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:28.340 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:28.340 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:28.340 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:28.340 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:28.340 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:28.340 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:28.340 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:28.340 CC lib/ftl/base/ftl_base_dev.o 00:02:28.340 CC lib/ftl/base/ftl_base_bdev.o 00:02:28.340 CC lib/ftl/ftl_trace.o 00:02:28.340 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:28.912 LIB libspdk_ublk.a 00:02:28.912 LIB libspdk_nbd.a 00:02:28.912 SO libspdk_ublk.so.3.0 00:02:28.912 SO libspdk_nbd.so.7.0 00:02:28.912 LIB 
libspdk_scsi.a 00:02:28.912 SYMLINK libspdk_ublk.so 00:02:28.912 SYMLINK libspdk_nbd.so 00:02:28.912 SO libspdk_scsi.so.9.0 00:02:29.173 SYMLINK libspdk_scsi.so 00:02:29.435 LIB libspdk_ftl.a 00:02:29.435 CC lib/vhost/vhost.o 00:02:29.435 CC lib/iscsi/conn.o 00:02:29.435 CC lib/vhost/vhost_rpc.o 00:02:29.435 CC lib/iscsi/init_grp.o 00:02:29.435 CC lib/iscsi/iscsi.o 00:02:29.435 CC lib/vhost/vhost_scsi.o 00:02:29.435 CC lib/iscsi/param.o 00:02:29.435 CC lib/vhost/vhost_blk.o 00:02:29.435 CC lib/iscsi/portal_grp.o 00:02:29.435 CC lib/vhost/rte_vhost_user.o 00:02:29.435 CC lib/iscsi/tgt_node.o 00:02:29.435 CC lib/iscsi/iscsi_subsystem.o 00:02:29.435 CC lib/iscsi/iscsi_rpc.o 00:02:29.435 CC lib/iscsi/task.o 00:02:29.435 SO libspdk_ftl.so.9.0 00:02:29.696 SYMLINK libspdk_ftl.so 00:02:30.269 LIB libspdk_nvmf.a 00:02:30.269 SO libspdk_nvmf.so.19.0 00:02:30.269 SYMLINK libspdk_nvmf.so 00:02:30.530 LIB libspdk_vhost.a 00:02:30.530 SO libspdk_vhost.so.8.0 00:02:30.530 SYMLINK libspdk_vhost.so 00:02:30.791 LIB libspdk_iscsi.a 00:02:30.791 SO libspdk_iscsi.so.8.0 00:02:30.791 SYMLINK libspdk_iscsi.so 00:02:31.364 CC module/vfu_device/vfu_virtio.o 00:02:31.364 CC module/vfu_device/vfu_virtio_blk.o 00:02:31.364 CC module/env_dpdk/env_dpdk_rpc.o 00:02:31.364 CC module/vfu_device/vfu_virtio_scsi.o 00:02:31.364 CC module/vfu_device/vfu_virtio_rpc.o 00:02:31.364 CC module/vfu_device/vfu_virtio_fs.o 00:02:31.625 CC module/sock/posix/posix.o 00:02:31.625 LIB libspdk_env_dpdk_rpc.a 00:02:31.625 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:31.625 CC module/accel/ioat/accel_ioat.o 00:02:31.625 CC module/blob/bdev/blob_bdev.o 00:02:31.625 CC module/accel/ioat/accel_ioat_rpc.o 00:02:31.625 CC module/keyring/file/keyring.o 00:02:31.625 CC module/accel/dsa/accel_dsa.o 00:02:31.625 CC module/keyring/file/keyring_rpc.o 00:02:31.625 CC module/accel/dsa/accel_dsa_rpc.o 00:02:31.625 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:31.625 CC module/fsdev/aio/fsdev_aio.o 
00:02:31.625 CC module/accel/error/accel_error.o 00:02:31.625 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:31.625 CC module/keyring/linux/keyring.o 00:02:31.625 CC module/scheduler/gscheduler/gscheduler.o 00:02:31.625 CC module/accel/error/accel_error_rpc.o 00:02:31.625 CC module/fsdev/aio/linux_aio_mgr.o 00:02:31.625 CC module/keyring/linux/keyring_rpc.o 00:02:31.625 CC module/accel/iaa/accel_iaa.o 00:02:31.625 CC module/accel/iaa/accel_iaa_rpc.o 00:02:31.625 SO libspdk_env_dpdk_rpc.so.6.0 00:02:31.887 SYMLINK libspdk_env_dpdk_rpc.so 00:02:31.887 LIB libspdk_scheduler_dpdk_governor.a 00:02:31.887 LIB libspdk_scheduler_gscheduler.a 00:02:31.887 LIB libspdk_keyring_linux.a 00:02:31.887 LIB libspdk_keyring_file.a 00:02:31.887 LIB libspdk_accel_error.a 00:02:31.887 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:31.887 LIB libspdk_scheduler_dynamic.a 00:02:31.887 SO libspdk_scheduler_gscheduler.so.4.0 00:02:31.887 LIB libspdk_accel_ioat.a 00:02:31.887 SO libspdk_keyring_linux.so.1.0 00:02:31.887 SO libspdk_keyring_file.so.2.0 00:02:31.887 LIB libspdk_accel_iaa.a 00:02:31.887 SO libspdk_accel_error.so.2.0 00:02:31.887 LIB libspdk_blob_bdev.a 00:02:31.887 SO libspdk_accel_ioat.so.6.0 00:02:31.887 SO libspdk_scheduler_dynamic.so.4.0 00:02:31.887 SO libspdk_accel_iaa.so.3.0 00:02:31.887 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:31.887 SYMLINK libspdk_keyring_linux.so 00:02:31.887 SYMLINK libspdk_scheduler_gscheduler.so 00:02:31.887 SO libspdk_blob_bdev.so.11.0 00:02:32.149 SYMLINK libspdk_keyring_file.so 00:02:32.149 LIB libspdk_accel_dsa.a 00:02:32.149 SYMLINK libspdk_accel_error.so 00:02:32.149 SYMLINK libspdk_scheduler_dynamic.so 00:02:32.149 SYMLINK libspdk_accel_ioat.so 00:02:32.149 SO libspdk_accel_dsa.so.5.0 00:02:32.149 SYMLINK libspdk_accel_iaa.so 00:02:32.149 SYMLINK libspdk_blob_bdev.so 00:02:32.149 LIB libspdk_vfu_device.a 00:02:32.149 SO libspdk_vfu_device.so.3.0 00:02:32.149 SYMLINK libspdk_accel_dsa.so 00:02:32.149 SYMLINK libspdk_vfu_device.so 
00:02:32.411 LIB libspdk_fsdev_aio.a 00:02:32.411 SO libspdk_fsdev_aio.so.1.0 00:02:32.411 LIB libspdk_sock_posix.a 00:02:32.411 SO libspdk_sock_posix.so.6.0 00:02:32.411 SYMLINK libspdk_fsdev_aio.so 00:02:32.411 SYMLINK libspdk_sock_posix.so 00:02:32.671 CC module/bdev/delay/vbdev_delay.o 00:02:32.671 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:32.671 CC module/bdev/error/vbdev_error.o 00:02:32.671 CC module/bdev/raid/bdev_raid.o 00:02:32.671 CC module/bdev/gpt/gpt.o 00:02:32.671 CC module/bdev/raid/bdev_raid_rpc.o 00:02:32.671 CC module/bdev/raid/bdev_raid_sb.o 00:02:32.671 CC module/bdev/gpt/vbdev_gpt.o 00:02:32.671 CC module/bdev/error/vbdev_error_rpc.o 00:02:32.671 CC module/bdev/raid/raid0.o 00:02:32.671 CC module/bdev/raid/raid1.o 00:02:32.671 CC module/bdev/raid/concat.o 00:02:32.671 CC module/bdev/lvol/vbdev_lvol.o 00:02:32.671 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:32.671 CC module/bdev/malloc/bdev_malloc.o 00:02:32.671 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:32.671 CC module/bdev/null/bdev_null.o 00:02:32.671 CC module/bdev/null/bdev_null_rpc.o 00:02:32.671 CC module/bdev/nvme/bdev_nvme.o 00:02:32.671 CC module/blobfs/bdev/blobfs_bdev.o 00:02:32.671 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:32.671 CC module/bdev/nvme/nvme_rpc.o 00:02:32.671 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:32.671 CC module/bdev/ftl/bdev_ftl.o 00:02:32.671 CC module/bdev/aio/bdev_aio.o 00:02:32.671 CC module/bdev/aio/bdev_aio_rpc.o 00:02:32.671 CC module/bdev/nvme/bdev_mdns_client.o 00:02:32.671 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:32.671 CC module/bdev/nvme/vbdev_opal.o 00:02:32.671 CC module/bdev/split/vbdev_split.o 00:02:32.671 CC module/bdev/passthru/vbdev_passthru.o 00:02:32.671 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:32.671 CC module/bdev/split/vbdev_split_rpc.o 00:02:32.671 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:32.671 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:32.671 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:32.671 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:32.671 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:32.671 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:32.671 CC module/bdev/iscsi/bdev_iscsi.o 00:02:32.671 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:32.671 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:32.932 LIB libspdk_blobfs_bdev.a 00:02:32.932 SO libspdk_blobfs_bdev.so.6.0 00:02:32.932 LIB libspdk_bdev_split.a 00:02:32.932 LIB libspdk_bdev_null.a 00:02:32.932 SYMLINK libspdk_blobfs_bdev.so 00:02:32.932 SO libspdk_bdev_split.so.6.0 00:02:32.932 SO libspdk_bdev_null.so.6.0 00:02:32.932 LIB libspdk_bdev_aio.a 00:02:33.193 LIB libspdk_bdev_error.a 00:02:33.193 LIB libspdk_bdev_gpt.a 00:02:33.193 SYMLINK libspdk_bdev_split.so 00:02:33.193 LIB libspdk_bdev_iscsi.a 00:02:33.193 SO libspdk_bdev_aio.so.6.0 00:02:33.193 SO libspdk_bdev_gpt.so.6.0 00:02:33.193 LIB libspdk_bdev_ftl.a 00:02:33.193 LIB libspdk_bdev_passthru.a 00:02:33.193 SO libspdk_bdev_error.so.6.0 00:02:33.193 SYMLINK libspdk_bdev_null.so 00:02:33.193 LIB libspdk_bdev_zone_block.a 00:02:33.193 LIB libspdk_bdev_malloc.a 00:02:33.193 SO libspdk_bdev_iscsi.so.6.0 00:02:33.193 SO libspdk_bdev_ftl.so.6.0 00:02:33.193 LIB libspdk_bdev_delay.a 00:02:33.193 SO libspdk_bdev_passthru.so.6.0 00:02:33.193 SO libspdk_bdev_zone_block.so.6.0 00:02:33.193 SYMLINK libspdk_bdev_aio.so 00:02:33.193 SYMLINK libspdk_bdev_error.so 00:02:33.193 SYMLINK libspdk_bdev_gpt.so 00:02:33.193 SO libspdk_bdev_malloc.so.6.0 00:02:33.193 SO libspdk_bdev_delay.so.6.0 00:02:33.193 SYMLINK libspdk_bdev_ftl.so 00:02:33.193 LIB libspdk_bdev_lvol.a 00:02:33.193 SYMLINK libspdk_bdev_iscsi.so 00:02:33.193 LIB libspdk_bdev_virtio.a 00:02:33.193 SYMLINK libspdk_bdev_passthru.so 00:02:33.193 SYMLINK libspdk_bdev_malloc.so 00:02:33.193 SYMLINK libspdk_bdev_zone_block.so 00:02:33.193 SO libspdk_bdev_lvol.so.6.0 00:02:33.193 SO libspdk_bdev_virtio.so.6.0 00:02:33.193 SYMLINK libspdk_bdev_delay.so 00:02:33.453 SYMLINK libspdk_bdev_lvol.so 
00:02:33.453 SYMLINK libspdk_bdev_virtio.so 00:02:33.713 LIB libspdk_bdev_raid.a 00:02:33.713 SO libspdk_bdev_raid.so.6.0 00:02:33.974 SYMLINK libspdk_bdev_raid.so 00:02:34.915 LIB libspdk_bdev_nvme.a 00:02:34.915 SO libspdk_bdev_nvme.so.7.0 00:02:34.915 SYMLINK libspdk_bdev_nvme.so 00:02:35.858 CC module/event/subsystems/iobuf/iobuf.o 00:02:35.858 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:35.858 CC module/event/subsystems/sock/sock.o 00:02:35.858 CC module/event/subsystems/vmd/vmd.o 00:02:35.858 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:35.858 CC module/event/subsystems/keyring/keyring.o 00:02:35.858 CC module/event/subsystems/scheduler/scheduler.o 00:02:35.858 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:35.858 CC module/event/subsystems/fsdev/fsdev.o 00:02:35.858 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:35.858 LIB libspdk_event_fsdev.a 00:02:35.858 LIB libspdk_event_scheduler.a 00:02:35.858 LIB libspdk_event_keyring.a 00:02:35.858 LIB libspdk_event_vmd.a 00:02:35.858 LIB libspdk_event_sock.a 00:02:35.858 LIB libspdk_event_iobuf.a 00:02:35.858 LIB libspdk_event_vhost_blk.a 00:02:35.858 LIB libspdk_event_vfu_tgt.a 00:02:35.858 SO libspdk_event_fsdev.so.1.0 00:02:35.858 SO libspdk_event_scheduler.so.4.0 00:02:35.858 SO libspdk_event_keyring.so.1.0 00:02:35.858 SO libspdk_event_vmd.so.6.0 00:02:35.858 SO libspdk_event_sock.so.5.0 00:02:35.858 SO libspdk_event_iobuf.so.3.0 00:02:35.858 SO libspdk_event_vhost_blk.so.3.0 00:02:35.858 SO libspdk_event_vfu_tgt.so.3.0 00:02:36.120 SYMLINK libspdk_event_fsdev.so 00:02:36.120 SYMLINK libspdk_event_keyring.so 00:02:36.120 SYMLINK libspdk_event_scheduler.so 00:02:36.120 SYMLINK libspdk_event_sock.so 00:02:36.120 SYMLINK libspdk_event_vfu_tgt.so 00:02:36.120 SYMLINK libspdk_event_vhost_blk.so 00:02:36.120 SYMLINK libspdk_event_vmd.so 00:02:36.120 SYMLINK libspdk_event_iobuf.so 00:02:36.382 CC module/event/subsystems/accel/accel.o 00:02:36.643 LIB libspdk_event_accel.a 00:02:36.643 SO 
libspdk_event_accel.so.6.0 00:02:36.643 SYMLINK libspdk_event_accel.so 00:02:36.904 CC module/event/subsystems/bdev/bdev.o 00:02:37.166 LIB libspdk_event_bdev.a 00:02:37.166 SO libspdk_event_bdev.so.6.0 00:02:37.166 SYMLINK libspdk_event_bdev.so 00:02:37.739 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:37.739 CC module/event/subsystems/ublk/ublk.o 00:02:37.739 CC module/event/subsystems/scsi/scsi.o 00:02:37.739 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:37.739 CC module/event/subsystems/nbd/nbd.o 00:02:37.739 LIB libspdk_event_scsi.a 00:02:37.739 LIB libspdk_event_ublk.a 00:02:37.739 LIB libspdk_event_nbd.a 00:02:37.739 SO libspdk_event_scsi.so.6.0 00:02:37.739 SO libspdk_event_ublk.so.3.0 00:02:37.739 SO libspdk_event_nbd.so.6.0 00:02:38.000 LIB libspdk_event_nvmf.a 00:02:38.000 SYMLINK libspdk_event_ublk.so 00:02:38.000 SYMLINK libspdk_event_scsi.so 00:02:38.000 SO libspdk_event_nvmf.so.6.0 00:02:38.000 SYMLINK libspdk_event_nbd.so 00:02:38.000 SYMLINK libspdk_event_nvmf.so 00:02:38.262 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:38.262 CC module/event/subsystems/iscsi/iscsi.o 00:02:38.523 LIB libspdk_event_vhost_scsi.a 00:02:38.523 LIB libspdk_event_iscsi.a 00:02:38.523 SO libspdk_event_vhost_scsi.so.3.0 00:02:38.523 SO libspdk_event_iscsi.so.6.0 00:02:38.523 SYMLINK libspdk_event_vhost_scsi.so 00:02:38.523 SYMLINK libspdk_event_iscsi.so 00:02:38.784 SO libspdk.so.6.0 00:02:38.784 SYMLINK libspdk.so 00:02:39.357 CXX app/trace/trace.o 00:02:39.357 CC app/spdk_nvme_perf/perf.o 00:02:39.357 CC app/trace_record/trace_record.o 00:02:39.357 TEST_HEADER include/spdk/accel.h 00:02:39.357 TEST_HEADER include/spdk/accel_module.h 00:02:39.357 TEST_HEADER include/spdk/assert.h 00:02:39.357 CC app/spdk_nvme_discover/discovery_aer.o 00:02:39.357 TEST_HEADER include/spdk/barrier.h 00:02:39.357 TEST_HEADER include/spdk/base64.h 00:02:39.357 TEST_HEADER include/spdk/bdev.h 00:02:39.357 TEST_HEADER include/spdk/bdev_module.h 00:02:39.357 TEST_HEADER 
include/spdk/bdev_zone.h 00:02:39.357 TEST_HEADER include/spdk/bit_array.h 00:02:39.357 TEST_HEADER include/spdk/bit_pool.h 00:02:39.357 CC app/spdk_lspci/spdk_lspci.o 00:02:39.357 TEST_HEADER include/spdk/blob_bdev.h 00:02:39.357 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:39.357 CC app/spdk_nvme_identify/identify.o 00:02:39.357 CC test/rpc_client/rpc_client_test.o 00:02:39.357 TEST_HEADER include/spdk/blobfs.h 00:02:39.357 CC app/spdk_top/spdk_top.o 00:02:39.357 TEST_HEADER include/spdk/blob.h 00:02:39.357 TEST_HEADER include/spdk/conf.h 00:02:39.357 TEST_HEADER include/spdk/config.h 00:02:39.357 TEST_HEADER include/spdk/crc16.h 00:02:39.357 TEST_HEADER include/spdk/cpuset.h 00:02:39.357 TEST_HEADER include/spdk/crc32.h 00:02:39.357 TEST_HEADER include/spdk/crc64.h 00:02:39.357 TEST_HEADER include/spdk/dma.h 00:02:39.357 TEST_HEADER include/spdk/dif.h 00:02:39.357 TEST_HEADER include/spdk/endian.h 00:02:39.357 TEST_HEADER include/spdk/env_dpdk.h 00:02:39.357 TEST_HEADER include/spdk/env.h 00:02:39.357 TEST_HEADER include/spdk/event.h 00:02:39.357 TEST_HEADER include/spdk/fd_group.h 00:02:39.357 TEST_HEADER include/spdk/fd.h 00:02:39.357 TEST_HEADER include/spdk/file.h 00:02:39.357 TEST_HEADER include/spdk/fsdev.h 00:02:39.357 TEST_HEADER include/spdk/fsdev_module.h 00:02:39.357 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:39.357 TEST_HEADER include/spdk/ftl.h 00:02:39.357 TEST_HEADER include/spdk/gpt_spec.h 00:02:39.357 TEST_HEADER include/spdk/hexlify.h 00:02:39.357 TEST_HEADER include/spdk/histogram_data.h 00:02:39.357 CC app/spdk_dd/spdk_dd.o 00:02:39.357 TEST_HEADER include/spdk/idxd.h 00:02:39.357 TEST_HEADER include/spdk/idxd_spec.h 00:02:39.357 TEST_HEADER include/spdk/init.h 00:02:39.357 TEST_HEADER include/spdk/ioat.h 00:02:39.357 TEST_HEADER include/spdk/ioat_spec.h 00:02:39.357 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:39.357 TEST_HEADER include/spdk/json.h 00:02:39.357 TEST_HEADER include/spdk/iscsi_spec.h 00:02:39.357 TEST_HEADER 
include/spdk/jsonrpc.h 00:02:39.357 TEST_HEADER include/spdk/keyring_module.h 00:02:39.357 TEST_HEADER include/spdk/likely.h 00:02:39.357 TEST_HEADER include/spdk/keyring.h 00:02:39.357 TEST_HEADER include/spdk/log.h 00:02:39.357 CC app/nvmf_tgt/nvmf_main.o 00:02:39.357 TEST_HEADER include/spdk/md5.h 00:02:39.357 TEST_HEADER include/spdk/lvol.h 00:02:39.357 TEST_HEADER include/spdk/mmio.h 00:02:39.357 TEST_HEADER include/spdk/memory.h 00:02:39.357 TEST_HEADER include/spdk/nbd.h 00:02:39.357 TEST_HEADER include/spdk/net.h 00:02:39.357 TEST_HEADER include/spdk/notify.h 00:02:39.357 TEST_HEADER include/spdk/nvme.h 00:02:39.357 TEST_HEADER include/spdk/nvme_intel.h 00:02:39.357 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:39.357 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:39.357 TEST_HEADER include/spdk/nvme_zns.h 00:02:39.357 TEST_HEADER include/spdk/nvme_spec.h 00:02:39.357 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:39.357 TEST_HEADER include/spdk/nvmf.h 00:02:39.357 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:39.357 TEST_HEADER include/spdk/nvmf_transport.h 00:02:39.357 TEST_HEADER include/spdk/nvmf_spec.h 00:02:39.357 TEST_HEADER include/spdk/opal.h 00:02:39.357 TEST_HEADER include/spdk/opal_spec.h 00:02:39.357 TEST_HEADER include/spdk/pci_ids.h 00:02:39.357 TEST_HEADER include/spdk/queue.h 00:02:39.357 TEST_HEADER include/spdk/pipe.h 00:02:39.357 TEST_HEADER include/spdk/reduce.h 00:02:39.357 TEST_HEADER include/spdk/rpc.h 00:02:39.357 TEST_HEADER include/spdk/scsi.h 00:02:39.357 TEST_HEADER include/spdk/scsi_spec.h 00:02:39.357 TEST_HEADER include/spdk/scheduler.h 00:02:39.357 TEST_HEADER include/spdk/sock.h 00:02:39.357 CC app/iscsi_tgt/iscsi_tgt.o 00:02:39.357 TEST_HEADER include/spdk/stdinc.h 00:02:39.357 TEST_HEADER include/spdk/string.h 00:02:39.357 TEST_HEADER include/spdk/thread.h 00:02:39.357 TEST_HEADER include/spdk/trace_parser.h 00:02:39.357 TEST_HEADER include/spdk/trace.h 00:02:39.357 TEST_HEADER include/spdk/tree.h 00:02:39.357 TEST_HEADER 
include/spdk/ublk.h 00:02:39.357 TEST_HEADER include/spdk/util.h 00:02:39.357 TEST_HEADER include/spdk/version.h 00:02:39.357 TEST_HEADER include/spdk/uuid.h 00:02:39.357 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:39.357 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:39.357 TEST_HEADER include/spdk/vhost.h 00:02:39.357 TEST_HEADER include/spdk/vmd.h 00:02:39.357 TEST_HEADER include/spdk/xor.h 00:02:39.357 TEST_HEADER include/spdk/zipf.h 00:02:39.357 CXX test/cpp_headers/accel.o 00:02:39.357 CC app/spdk_tgt/spdk_tgt.o 00:02:39.358 CXX test/cpp_headers/accel_module.o 00:02:39.358 CXX test/cpp_headers/assert.o 00:02:39.358 CXX test/cpp_headers/barrier.o 00:02:39.358 CXX test/cpp_headers/base64.o 00:02:39.358 CXX test/cpp_headers/bdev.o 00:02:39.358 CXX test/cpp_headers/bdev_module.o 00:02:39.358 CXX test/cpp_headers/bdev_zone.o 00:02:39.358 CXX test/cpp_headers/bit_pool.o 00:02:39.358 CXX test/cpp_headers/blob_bdev.o 00:02:39.358 CXX test/cpp_headers/blobfs.o 00:02:39.358 CXX test/cpp_headers/bit_array.o 00:02:39.358 CXX test/cpp_headers/blob.o 00:02:39.358 CXX test/cpp_headers/conf.o 00:02:39.358 CXX test/cpp_headers/config.o 00:02:39.358 CXX test/cpp_headers/blobfs_bdev.o 00:02:39.358 CXX test/cpp_headers/crc16.o 00:02:39.358 CXX test/cpp_headers/cpuset.o 00:02:39.358 CXX test/cpp_headers/crc64.o 00:02:39.358 CXX test/cpp_headers/dif.o 00:02:39.358 CXX test/cpp_headers/crc32.o 00:02:39.358 CXX test/cpp_headers/dma.o 00:02:39.358 CXX test/cpp_headers/env.o 00:02:39.358 CXX test/cpp_headers/endian.o 00:02:39.358 CXX test/cpp_headers/fd_group.o 00:02:39.358 CXX test/cpp_headers/fsdev.o 00:02:39.358 CXX test/cpp_headers/event.o 00:02:39.358 CXX test/cpp_headers/fsdev_module.o 00:02:39.358 CXX test/cpp_headers/env_dpdk.o 00:02:39.358 CXX test/cpp_headers/fd.o 00:02:39.358 CXX test/cpp_headers/file.o 00:02:39.358 CXX test/cpp_headers/histogram_data.o 00:02:39.358 CXX test/cpp_headers/idxd.o 00:02:39.358 CXX test/cpp_headers/fuse_dispatcher.o 00:02:39.358 CXX 
test/cpp_headers/idxd_spec.o 00:02:39.358 CXX test/cpp_headers/gpt_spec.o 00:02:39.358 CXX test/cpp_headers/ftl.o 00:02:39.358 CXX test/cpp_headers/init.o 00:02:39.358 CXX test/cpp_headers/hexlify.o 00:02:39.358 CXX test/cpp_headers/ioat.o 00:02:39.358 CXX test/cpp_headers/ioat_spec.o 00:02:39.358 CXX test/cpp_headers/json.o 00:02:39.358 CXX test/cpp_headers/iscsi_spec.o 00:02:39.358 CXX test/cpp_headers/keyring.o 00:02:39.358 CC examples/util/zipf/zipf.o 00:02:39.358 CXX test/cpp_headers/jsonrpc.o 00:02:39.358 CXX test/cpp_headers/keyring_module.o 00:02:39.358 CXX test/cpp_headers/log.o 00:02:39.358 CXX test/cpp_headers/lvol.o 00:02:39.358 CXX test/cpp_headers/md5.o 00:02:39.358 CC test/thread/poller_perf/poller_perf.o 00:02:39.358 CXX test/cpp_headers/mmio.o 00:02:39.358 CXX test/cpp_headers/memory.o 00:02:39.358 CXX test/cpp_headers/notify.o 00:02:39.358 CXX test/cpp_headers/nbd.o 00:02:39.358 CXX test/cpp_headers/net.o 00:02:39.358 LINK spdk_lspci 00:02:39.358 CC examples/ioat/perf/perf.o 00:02:39.358 CC test/env/vtophys/vtophys.o 00:02:39.358 CXX test/cpp_headers/likely.o 00:02:39.618 CXX test/cpp_headers/nvme_intel.o 00:02:39.618 CC examples/ioat/verify/verify.o 00:02:39.618 CXX test/cpp_headers/nvme.o 00:02:39.618 CXX test/cpp_headers/nvmf_cmd.o 00:02:39.618 CC test/app/bdev_svc/bdev_svc.o 00:02:39.618 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:39.618 CXX test/cpp_headers/nvme_spec.o 00:02:39.618 CXX test/cpp_headers/nvme_ocssd.o 00:02:39.618 CXX test/cpp_headers/nvmf.o 00:02:39.618 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:39.618 CXX test/cpp_headers/nvmf_spec.o 00:02:39.618 LINK spdk_nvme_discover 00:02:39.618 CXX test/cpp_headers/nvme_zns.o 00:02:39.618 CC test/app/jsoncat/jsoncat.o 00:02:39.618 CXX test/cpp_headers/nvmf_transport.o 00:02:39.618 CXX test/cpp_headers/pci_ids.o 00:02:39.618 CXX test/cpp_headers/opal.o 00:02:39.618 CXX test/cpp_headers/queue.o 00:02:39.618 CXX test/cpp_headers/opal_spec.o 00:02:39.618 CXX test/cpp_headers/scsi.o 
00:02:39.618 CXX test/cpp_headers/scsi_spec.o 00:02:39.618 CXX test/cpp_headers/reduce.o 00:02:39.618 CXX test/cpp_headers/rpc.o 00:02:39.618 CXX test/cpp_headers/pipe.o 00:02:39.618 CXX test/cpp_headers/scheduler.o 00:02:39.618 CXX test/cpp_headers/stdinc.o 00:02:39.618 CC test/dma/test_dma/test_dma.o 00:02:39.618 CXX test/cpp_headers/trace_parser.o 00:02:39.618 CC test/app/stub/stub.o 00:02:39.879 CC test/env/pci/pci_ut.o 00:02:39.879 CXX test/cpp_headers/sock.o 00:02:39.879 CXX test/cpp_headers/trace.o 00:02:39.879 CXX test/cpp_headers/ublk.o 00:02:39.879 LINK spdk_tgt 00:02:39.879 CC app/fio/bdev/fio_plugin.o 00:02:39.879 CXX test/cpp_headers/string.o 00:02:39.879 CXX test/cpp_headers/thread.o 00:02:39.879 CXX test/cpp_headers/tree.o 00:02:39.879 CXX test/cpp_headers/vfio_user_pci.o 00:02:39.879 LINK spdk_trace_record 00:02:39.879 CXX test/cpp_headers/util.o 00:02:39.879 CXX test/cpp_headers/vhost.o 00:02:39.879 CXX test/cpp_headers/version.o 00:02:39.879 CXX test/cpp_headers/vmd.o 00:02:39.879 CXX test/cpp_headers/vfio_user_spec.o 00:02:39.879 CXX test/cpp_headers/uuid.o 00:02:39.879 CXX test/cpp_headers/zipf.o 00:02:39.879 CC test/env/memory/memory_ut.o 00:02:39.879 CXX test/cpp_headers/xor.o 00:02:39.879 CC app/fio/nvme/fio_plugin.o 00:02:39.879 CC test/app/histogram_perf/histogram_perf.o 00:02:39.879 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:40.139 LINK vtophys 00:02:40.139 LINK bdev_svc 00:02:40.139 LINK poller_perf 00:02:40.139 LINK zipf 00:02:40.139 LINK spdk_dd 00:02:40.139 LINK jsoncat 00:02:40.139 LINK spdk_trace 00:02:40.398 LINK histogram_perf 00:02:40.398 LINK env_dpdk_post_init 00:02:40.398 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:40.398 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:40.398 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:40.398 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:40.398 CC test/env/mem_callbacks/mem_callbacks.o 00:02:40.398 LINK nvmf_tgt 00:02:40.398 LINK spdk_nvme_identify 00:02:40.657 LINK 
iscsi_tgt 00:02:40.657 LINK rpc_client_test 00:02:40.657 LINK pci_ut 00:02:40.657 LINK interrupt_tgt 00:02:40.657 LINK spdk_bdev 00:02:40.657 CC test/event/reactor/reactor.o 00:02:40.657 LINK spdk_top 00:02:40.657 CC test/event/reactor_perf/reactor_perf.o 00:02:40.657 CC test/event/event_perf/event_perf.o 00:02:40.657 LINK stub 00:02:40.657 CC test/event/app_repeat/app_repeat.o 00:02:40.657 CC app/vhost/vhost.o 00:02:40.657 CC test/event/scheduler/scheduler.o 00:02:40.657 CC examples/sock/hello_world/hello_sock.o 00:02:40.657 LINK verify 00:02:40.657 CC examples/idxd/perf/perf.o 00:02:40.657 CC examples/vmd/lsvmd/lsvmd.o 00:02:40.657 CC examples/thread/thread/thread_ex.o 00:02:40.917 CC examples/vmd/led/led.o 00:02:40.917 LINK ioat_perf 00:02:40.917 LINK nvme_fuzz 00:02:40.917 LINK spdk_nvme 00:02:40.917 LINK vhost_fuzz 00:02:40.917 LINK reactor 00:02:40.917 LINK event_perf 00:02:40.917 LINK reactor_perf 00:02:40.917 LINK app_repeat 00:02:40.917 LINK lsvmd 00:02:40.917 LINK vhost 00:02:40.917 LINK led 00:02:40.917 LINK scheduler 00:02:40.917 LINK mem_callbacks 00:02:41.178 LINK hello_sock 00:02:41.178 LINK thread 00:02:41.178 LINK idxd_perf 00:02:41.178 LINK memory_ut 00:02:41.438 LINK test_dma 00:02:41.438 LINK spdk_nvme_perf 00:02:41.698 CC examples/nvme/arbitration/arbitration.o 00:02:41.698 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:41.698 CC examples/nvme/hello_world/hello_world.o 00:02:41.698 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:41.698 CC examples/nvme/reconnect/reconnect.o 00:02:41.698 CC examples/nvme/abort/abort.o 00:02:41.698 CC examples/nvme/hotplug/hotplug.o 00:02:41.699 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:41.699 CC examples/accel/perf/accel_perf.o 00:02:41.699 CC examples/blob/cli/blobcli.o 00:02:41.699 CC examples/blob/hello_world/hello_blob.o 00:02:41.699 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:41.957 LINK cmb_copy 00:02:41.957 LINK pmr_persistence 00:02:41.957 LINK hello_world 00:02:41.957 LINK hotplug 
00:02:41.957 CC test/nvme/simple_copy/simple_copy.o 00:02:41.957 CC test/nvme/compliance/nvme_compliance.o 00:02:41.957 CC test/nvme/startup/startup.o 00:02:41.957 CC test/nvme/overhead/overhead.o 00:02:41.957 CC test/nvme/connect_stress/connect_stress.o 00:02:41.957 CC test/nvme/reset/reset.o 00:02:41.957 CC test/nvme/e2edp/nvme_dp.o 00:02:41.957 CC test/nvme/aer/aer.o 00:02:41.957 CC test/nvme/fdp/fdp.o 00:02:41.957 CC test/nvme/boot_partition/boot_partition.o 00:02:41.957 CC test/nvme/sgl/sgl.o 00:02:41.957 CC test/nvme/err_injection/err_injection.o 00:02:41.957 CC test/nvme/reserve/reserve.o 00:02:41.957 CC test/nvme/fused_ordering/fused_ordering.o 00:02:41.957 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:41.957 CC test/nvme/cuse/cuse.o 00:02:41.957 LINK arbitration 00:02:41.957 CC test/accel/dif/dif.o 00:02:41.957 LINK reconnect 00:02:41.958 CC test/blobfs/mkfs/mkfs.o 00:02:41.958 LINK abort 00:02:41.958 LINK hello_blob 00:02:42.219 LINK iscsi_fuzz 00:02:42.219 LINK hello_fsdev 00:02:42.219 CC test/lvol/esnap/esnap.o 00:02:42.219 LINK nvme_manage 00:02:42.219 LINK startup 00:02:42.219 LINK connect_stress 00:02:42.219 LINK boot_partition 00:02:42.219 LINK err_injection 00:02:42.219 LINK simple_copy 00:02:42.219 LINK reserve 00:02:42.219 LINK doorbell_aers 00:02:42.219 LINK fused_ordering 00:02:42.219 LINK accel_perf 00:02:42.219 LINK blobcli 00:02:42.219 LINK sgl 00:02:42.219 LINK aer 00:02:42.219 LINK reset 00:02:42.219 LINK mkfs 00:02:42.219 LINK overhead 00:02:42.219 LINK nvme_dp 00:02:42.219 LINK nvme_compliance 00:02:42.480 LINK fdp 00:02:42.740 LINK dif 00:02:42.740 CC examples/bdev/hello_world/hello_bdev.o 00:02:42.740 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.311 LINK hello_bdev 00:02:43.311 LINK cuse 00:02:43.311 CC test/bdev/bdevio/bdevio.o 00:02:43.572 LINK bdevperf 00:02:43.572 LINK bdevio 00:02:44.143 CC examples/nvmf/nvmf/nvmf.o 00:02:44.714 LINK nvmf 00:02:46.627 LINK esnap 00:02:46.888 00:02:46.888 real 0m55.260s 00:02:46.888 user 
6m27.195s 00:02:46.888 sys 3m57.686s 00:02:46.888 21:51:05 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:46.888 21:51:05 make -- common/autotest_common.sh@10 -- $ set +x 00:02:46.888 ************************************ 00:02:46.888 END TEST make 00:02:46.888 ************************************ 00:02:46.888 21:51:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:46.888 21:51:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:46.888 21:51:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:46.888 21:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.888 21:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:46.888 21:51:05 -- pm/common@44 -- $ pid=3145718 00:02:46.888 21:51:05 -- pm/common@50 -- $ kill -TERM 3145718 00:02:46.888 21:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.888 21:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:46.888 21:51:05 -- pm/common@44 -- $ pid=3145720 00:02:46.888 21:51:05 -- pm/common@50 -- $ kill -TERM 3145720 00:02:46.888 21:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.888 21:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:46.888 21:51:05 -- pm/common@44 -- $ pid=3145721 00:02:46.888 21:51:05 -- pm/common@50 -- $ kill -TERM 3145721 00:02:46.888 21:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.888 21:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:46.888 21:51:05 -- pm/common@44 -- $ pid=3145746 00:02:46.888 21:51:05 -- pm/common@50 -- $ sudo -E kill -TERM 3145746 00:02:47.149 21:51:05 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:47.149 21:51:05 
-- common/autotest_common.sh@1681 -- # lcov --version 00:02:47.149 21:51:05 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:47.149 21:51:05 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:47.149 21:51:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:47.149 21:51:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:47.149 21:51:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:47.149 21:51:05 -- scripts/common.sh@336 -- # IFS=.-: 00:02:47.149 21:51:05 -- scripts/common.sh@336 -- # read -ra ver1 00:02:47.149 21:51:05 -- scripts/common.sh@337 -- # IFS=.-: 00:02:47.149 21:51:05 -- scripts/common.sh@337 -- # read -ra ver2 00:02:47.149 21:51:05 -- scripts/common.sh@338 -- # local 'op=<' 00:02:47.149 21:51:05 -- scripts/common.sh@340 -- # ver1_l=2 00:02:47.149 21:51:05 -- scripts/common.sh@341 -- # ver2_l=1 00:02:47.149 21:51:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:47.149 21:51:05 -- scripts/common.sh@344 -- # case "$op" in 00:02:47.149 21:51:05 -- scripts/common.sh@345 -- # : 1 00:02:47.149 21:51:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:47.149 21:51:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:47.149 21:51:05 -- scripts/common.sh@365 -- # decimal 1 00:02:47.149 21:51:05 -- scripts/common.sh@353 -- # local d=1 00:02:47.149 21:51:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:47.149 21:51:05 -- scripts/common.sh@355 -- # echo 1 00:02:47.149 21:51:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:47.149 21:51:05 -- scripts/common.sh@366 -- # decimal 2 00:02:47.149 21:51:05 -- scripts/common.sh@353 -- # local d=2 00:02:47.149 21:51:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:47.149 21:51:05 -- scripts/common.sh@355 -- # echo 2 00:02:47.149 21:51:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:47.149 21:51:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:47.149 21:51:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:47.149 21:51:05 -- scripts/common.sh@368 -- # return 0 00:02:47.149 21:51:05 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:47.149 21:51:05 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:47.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.149 --rc genhtml_branch_coverage=1 00:02:47.149 --rc genhtml_function_coverage=1 00:02:47.149 --rc genhtml_legend=1 00:02:47.149 --rc geninfo_all_blocks=1 00:02:47.149 --rc geninfo_unexecuted_blocks=1 00:02:47.149 00:02:47.149 ' 00:02:47.149 21:51:05 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:47.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.149 --rc genhtml_branch_coverage=1 00:02:47.149 --rc genhtml_function_coverage=1 00:02:47.149 --rc genhtml_legend=1 00:02:47.149 --rc geninfo_all_blocks=1 00:02:47.149 --rc geninfo_unexecuted_blocks=1 00:02:47.149 00:02:47.149 ' 00:02:47.149 21:51:05 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:47.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.149 --rc genhtml_branch_coverage=1 00:02:47.149 --rc 
genhtml_function_coverage=1 00:02:47.149 --rc genhtml_legend=1 00:02:47.149 --rc geninfo_all_blocks=1 00:02:47.149 --rc geninfo_unexecuted_blocks=1 00:02:47.149 00:02:47.149 ' 00:02:47.149 21:51:05 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:47.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:47.150 --rc genhtml_branch_coverage=1 00:02:47.150 --rc genhtml_function_coverage=1 00:02:47.150 --rc genhtml_legend=1 00:02:47.150 --rc geninfo_all_blocks=1 00:02:47.150 --rc geninfo_unexecuted_blocks=1 00:02:47.150 00:02:47.150 ' 00:02:47.150 21:51:05 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:47.150 21:51:05 -- nvmf/common.sh@7 -- # uname -s 00:02:47.150 21:51:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:47.150 21:51:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:47.150 21:51:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:47.150 21:51:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:47.150 21:51:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:47.150 21:51:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:47.150 21:51:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:47.150 21:51:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:47.150 21:51:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.150 21:51:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.150 21:51:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:47.150 21:51:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:47.150 21:51:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.150 21:51:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.150 21:51:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:47.150 21:51:05 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:47.150 21:51:05 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:47.150 21:51:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:47.150 21:51:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.150 21:51:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.150 21:51:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.150 21:51:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.150 21:51:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.150 21:51:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.150 21:51:05 -- paths/export.sh@5 -- # export PATH 00:02:47.150 21:51:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.150 21:51:05 -- nvmf/common.sh@51 -- # : 0 00:02:47.150 21:51:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:47.150 21:51:05 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:47.150 21:51:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:47.150 21:51:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.150 21:51:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.150 21:51:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:47.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:47.150 21:51:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:47.150 21:51:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:47.150 21:51:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:47.150 21:51:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.150 21:51:05 -- spdk/autotest.sh@32 -- # uname -s 00:02:47.150 21:51:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.150 21:51:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.150 21:51:05 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:47.150 21:51:05 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.150 21:51:05 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:47.150 21:51:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.150 21:51:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.150 21:51:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.150 21:51:05 -- spdk/autotest.sh@48 -- # udevadm_pid=3227671 00:02:47.150 21:51:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:47.150 21:51:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.150 21:51:05 -- pm/common@17 -- # local monitor 00:02:47.150 21:51:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.150 21:51:05 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:47.150 21:51:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.150 21:51:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.150 21:51:05 -- pm/common@21 -- # date +%s 00:02:47.150 21:51:05 -- pm/common@25 -- # sleep 1 00:02:47.150 21:51:05 -- pm/common@21 -- # date +%s 00:02:47.150 21:51:05 -- pm/common@21 -- # date +%s 00:02:47.150 21:51:05 -- pm/common@21 -- # date +%s 00:02:47.150 21:51:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728762665 00:02:47.150 21:51:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728762665 00:02:47.150 21:51:05 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728762665 00:02:47.150 21:51:05 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728762665 00:02:47.150 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728762665_collect-vmstat.pm.log 00:02:47.150 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728762665_collect-cpu-load.pm.log 00:02:47.150 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728762665_collect-cpu-temp.pm.log 00:02:47.412 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728762665_collect-bmc-pm.bmc.pm.log 00:02:48.356 
21:51:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:48.356 21:51:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:48.356 21:51:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:48.356 21:51:06 -- common/autotest_common.sh@10 -- # set +x 00:02:48.356 21:51:06 -- spdk/autotest.sh@59 -- # create_test_list 00:02:48.356 21:51:06 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:48.356 21:51:06 -- common/autotest_common.sh@10 -- # set +x 00:02:48.356 21:51:06 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:48.356 21:51:06 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:48.356 21:51:06 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:48.356 21:51:06 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:48.356 21:51:06 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:48.356 21:51:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:48.356 21:51:06 -- common/autotest_common.sh@1455 -- # uname 00:02:48.356 21:51:06 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:48.356 21:51:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:48.356 21:51:06 -- common/autotest_common.sh@1475 -- # uname 00:02:48.356 21:51:06 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:48.356 21:51:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:48.356 21:51:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:48.356 lcov: LCOV version 1.15 00:02:48.356 21:51:06 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:10.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:10.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:18.462 21:51:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:18.462 21:51:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:18.462 21:51:36 -- common/autotest_common.sh@10 -- # set +x 00:03:18.462 21:51:36 -- spdk/autotest.sh@78 -- # rm -f 00:03:18.462 21:51:36 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.673 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:22.673 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:22.673 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:22.673 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:22.673 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:22.673 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:22.673 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:22.674 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:22.674 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:22.674 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:22.674 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:22.674 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:22.674 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:22.674 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:22.674 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:22.674 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:22.674 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:22.674 21:51:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:22.674 21:51:40 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:22.674 21:51:40 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:22.674 21:51:40 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:22.674 21:51:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:22.674 21:51:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:22.674 21:51:40 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:22.674 21:51:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:22.674 21:51:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:22.674 21:51:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:22.674 21:51:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:22.674 21:51:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:22.674 21:51:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:22.674 21:51:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:22.674 21:51:40 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:22.674 No valid GPT data, bailing 00:03:22.674 21:51:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:22.674 21:51:41 -- scripts/common.sh@394 -- # pt= 00:03:22.674 21:51:41 -- scripts/common.sh@395 -- # return 1 00:03:22.674 21:51:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:22.674 1+0 records in 00:03:22.674 1+0 records out 00:03:22.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047193 s, 222 MB/s 00:03:22.674 21:51:41 -- spdk/autotest.sh@105 -- # sync 00:03:22.674 21:51:41 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:22.674 21:51:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:22.674 21:51:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:32.690 21:51:49 -- spdk/autotest.sh@111 -- # uname -s 00:03:32.690 21:51:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:32.690 21:51:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:32.690 21:51:49 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:34.607 Hugepages 00:03:34.607 node hugesize free / total 00:03:34.607 node0 1048576kB 0 / 0 00:03:34.607 node0 2048kB 0 / 0 00:03:34.607 node1 1048576kB 0 / 0 00:03:34.607 node1 2048kB 0 / 0 00:03:34.607 00:03:34.607 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:34.607 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:34.607 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:34.607 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:34.607 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:34.607 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:34.607 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:34.607 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:34.607 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:34.607 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:34.607 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:34.607 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:34.607 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:34.607 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:34.607 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:34.607 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:34.607 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:34.607 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:34.607 21:51:53 -- spdk/autotest.sh@117 -- # uname -s 00:03:34.607 21:51:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:34.607 21:51:53 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:34.607 21:51:53 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.819 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:38.819 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:40.203 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:40.463 21:51:58 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:41.406 21:51:59 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:41.406 21:51:59 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:41.406 21:51:59 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:41.406 21:51:59 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:41.406 21:51:59 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:41.406 21:51:59 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:41.406 21:51:59 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:41.406 21:51:59 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:41.406 21:51:59 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:03:41.406 21:51:59 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:41.406 21:51:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:41.406 21:51:59 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.616 Waiting for block devices as requested 00:03:45.616 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:45.616 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:45.616 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:45.616 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:45.616 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:45.616 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:45.616 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:45.616 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:45.616 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:45.877 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:45.877 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:45.877 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:46.139 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:46.139 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:46.139 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:46.400 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:46.400 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:46.661 21:52:05 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:46.661 21:52:05 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:46.661 21:52:05 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:46.661 21:52:05 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:46.661 21:52:05 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:46.662 21:52:05 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:46.662 21:52:05 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:46.662 21:52:05 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:46.662 21:52:05 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:46.662 21:52:05 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:46.662 21:52:05 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:46.662 21:52:05 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:46.662 21:52:05 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:46.662 21:52:05 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:46.662 21:52:05 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:46.662 21:52:05 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:46.662 21:52:05 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:46.662 21:52:05 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:46.662 21:52:05 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:46.662 21:52:05 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:46.662 21:52:05 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:46.662 21:52:05 -- common/autotest_common.sh@1541 -- # continue 00:03:46.662 21:52:05 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:46.662 21:52:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:46.662 21:52:05 -- common/autotest_common.sh@10 -- # set +x 00:03:46.662 21:52:05 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:46.662 21:52:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:46.662 21:52:05 -- common/autotest_common.sh@10 -- # set +x 00:03:46.662 21:52:05 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.875 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:03:50.875 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:50.875 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:50.875 21:52:09 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:50.875 21:52:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:50.875 21:52:09 -- common/autotest_common.sh@10 -- # set +x 00:03:50.875 21:52:09 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:50.875 21:52:09 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:50.875 21:52:09 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:50.875 21:52:09 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:50.875 21:52:09 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:50.875 21:52:09 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:50.875 21:52:09 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:50.875 21:52:09 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:50.875 21:52:09 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:50.875 21:52:09 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:50.875 21:52:09 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:50.875 21:52:09 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:50.875 21:52:09 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:50.875 21:52:09 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:50.875 21:52:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:50.875 21:52:09 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:50.875 21:52:09 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:50.875 21:52:09 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:50.875 21:52:09 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:50.875 21:52:09 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:50.875 21:52:09 -- common/autotest_common.sh@1570 -- # return 0 00:03:50.875 21:52:09 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:50.875 21:52:09 -- common/autotest_common.sh@1578 -- # return 0 00:03:50.875 21:52:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:50.875 21:52:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:50.875 21:52:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.875 21:52:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.875 21:52:09 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:50.875 21:52:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:50.875 21:52:09 -- common/autotest_common.sh@10 -- # set +x 00:03:50.875 21:52:09 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:50.875 21:52:09 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:50.875 21:52:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.875 21:52:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.875 21:52:09 -- common/autotest_common.sh@10 -- # set +x 00:03:50.875 ************************************ 
00:03:50.875 START TEST env 00:03:50.875 ************************************ 00:03:50.875 21:52:09 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:51.137 * Looking for test storage... 00:03:51.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:51.137 21:52:09 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:51.137 21:52:09 env -- common/autotest_common.sh@1681 -- # lcov --version 00:03:51.137 21:52:09 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:51.137 21:52:09 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:51.137 21:52:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.137 21:52:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.137 21:52:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.137 21:52:09 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.137 21:52:09 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.137 21:52:09 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.137 21:52:09 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.137 21:52:09 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.137 21:52:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.137 21:52:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.137 21:52:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.137 21:52:09 env -- scripts/common.sh@344 -- # case "$op" in 00:03:51.137 21:52:09 env -- scripts/common.sh@345 -- # : 1 00:03:51.137 21:52:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.137 21:52:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.137 21:52:09 env -- scripts/common.sh@365 -- # decimal 1 00:03:51.137 21:52:09 env -- scripts/common.sh@353 -- # local d=1 00:03:51.137 21:52:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.137 21:52:09 env -- scripts/common.sh@355 -- # echo 1 00:03:51.137 21:52:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.137 21:52:09 env -- scripts/common.sh@366 -- # decimal 2 00:03:51.137 21:52:09 env -- scripts/common.sh@353 -- # local d=2 00:03:51.137 21:52:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.138 21:52:09 env -- scripts/common.sh@355 -- # echo 2 00:03:51.138 21:52:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.138 21:52:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.138 21:52:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.138 21:52:09 env -- scripts/common.sh@368 -- # return 0 00:03:51.138 21:52:09 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.138 21:52:09 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:51.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.138 --rc genhtml_branch_coverage=1 00:03:51.138 --rc genhtml_function_coverage=1 00:03:51.138 --rc genhtml_legend=1 00:03:51.138 --rc geninfo_all_blocks=1 00:03:51.138 --rc geninfo_unexecuted_blocks=1 00:03:51.138 00:03:51.138 ' 00:03:51.138 21:52:09 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:51.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.138 --rc genhtml_branch_coverage=1 00:03:51.138 --rc genhtml_function_coverage=1 00:03:51.138 --rc genhtml_legend=1 00:03:51.138 --rc geninfo_all_blocks=1 00:03:51.138 --rc geninfo_unexecuted_blocks=1 00:03:51.138 00:03:51.138 ' 00:03:51.138 21:52:09 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:51.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:51.138 --rc genhtml_branch_coverage=1 00:03:51.138 --rc genhtml_function_coverage=1 00:03:51.138 --rc genhtml_legend=1 00:03:51.138 --rc geninfo_all_blocks=1 00:03:51.138 --rc geninfo_unexecuted_blocks=1 00:03:51.138 00:03:51.138 ' 00:03:51.138 21:52:09 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:51.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.138 --rc genhtml_branch_coverage=1 00:03:51.138 --rc genhtml_function_coverage=1 00:03:51.138 --rc genhtml_legend=1 00:03:51.138 --rc geninfo_all_blocks=1 00:03:51.138 --rc geninfo_unexecuted_blocks=1 00:03:51.138 00:03:51.138 ' 00:03:51.138 21:52:09 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:51.138 21:52:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.138 21:52:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.138 21:52:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.138 ************************************ 00:03:51.138 START TEST env_memory 00:03:51.138 ************************************ 00:03:51.138 21:52:09 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:51.138 00:03:51.138 00:03:51.138 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.138 http://cunit.sourceforge.net/ 00:03:51.138 00:03:51.138 00:03:51.138 Suite: memory 00:03:51.138 Test: alloc and free memory map ...[2024-10-12 21:52:09.603182] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:51.138 passed 00:03:51.400 Test: mem map translation ...[2024-10-12 21:52:09.628677] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:51.400 [2024-10-12 
21:52:09.628704] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:51.400 [2024-10-12 21:52:09.628750] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:51.400 [2024-10-12 21:52:09.628759] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:51.400 passed 00:03:51.400 Test: mem map registration ...[2024-10-12 21:52:09.683899] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:51.400 [2024-10-12 21:52:09.683918] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:51.400 passed 00:03:51.400 Test: mem map adjacent registrations ...passed 00:03:51.400 00:03:51.400 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.400 suites 1 1 n/a 0 0 00:03:51.400 tests 4 4 4 0 0 00:03:51.400 asserts 152 152 152 0 n/a 00:03:51.400 00:03:51.400 Elapsed time = 0.192 seconds 00:03:51.400 00:03:51.400 real 0m0.207s 00:03:51.400 user 0m0.193s 00:03:51.400 sys 0m0.014s 00:03:51.400 21:52:09 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:51.400 21:52:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:51.400 ************************************ 00:03:51.400 END TEST env_memory 00:03:51.400 ************************************ 00:03:51.400 21:52:09 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:51.400 21:52:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:03:51.400 21:52:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.400 21:52:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.400 ************************************ 00:03:51.400 START TEST env_vtophys 00:03:51.400 ************************************ 00:03:51.400 21:52:09 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:51.400 EAL: lib.eal log level changed from notice to debug 00:03:51.400 EAL: Detected lcore 0 as core 0 on socket 0 00:03:51.400 EAL: Detected lcore 1 as core 1 on socket 0 00:03:51.400 EAL: Detected lcore 2 as core 2 on socket 0 00:03:51.400 EAL: Detected lcore 3 as core 3 on socket 0 00:03:51.400 EAL: Detected lcore 4 as core 4 on socket 0 00:03:51.401 EAL: Detected lcore 5 as core 5 on socket 0 00:03:51.401 EAL: Detected lcore 6 as core 6 on socket 0 00:03:51.401 EAL: Detected lcore 7 as core 7 on socket 0 00:03:51.401 EAL: Detected lcore 8 as core 8 on socket 0 00:03:51.401 EAL: Detected lcore 9 as core 9 on socket 0 00:03:51.401 EAL: Detected lcore 10 as core 10 on socket 0 00:03:51.401 EAL: Detected lcore 11 as core 11 on socket 0 00:03:51.401 EAL: Detected lcore 12 as core 12 on socket 0 00:03:51.401 EAL: Detected lcore 13 as core 13 on socket 0 00:03:51.401 EAL: Detected lcore 14 as core 14 on socket 0 00:03:51.401 EAL: Detected lcore 15 as core 15 on socket 0 00:03:51.401 EAL: Detected lcore 16 as core 16 on socket 0 00:03:51.401 EAL: Detected lcore 17 as core 17 on socket 0 00:03:51.401 EAL: Detected lcore 18 as core 18 on socket 0 00:03:51.401 EAL: Detected lcore 19 as core 19 on socket 0 00:03:51.401 EAL: Detected lcore 20 as core 20 on socket 0 00:03:51.401 EAL: Detected lcore 21 as core 21 on socket 0 00:03:51.401 EAL: Detected lcore 22 as core 22 on socket 0 00:03:51.401 EAL: Detected lcore 23 as core 23 on socket 0 00:03:51.401 EAL: Detected lcore 24 as core 24 on socket 0 00:03:51.401 EAL: Detected lcore 25 
as core 25 on socket 0 00:03:51.401 EAL: Detected lcore 26 as core 26 on socket 0 00:03:51.401 EAL: Detected lcore 27 as core 27 on socket 0 00:03:51.401 EAL: Detected lcore 28 as core 28 on socket 0 00:03:51.401 EAL: Detected lcore 29 as core 29 on socket 0 00:03:51.401 EAL: Detected lcore 30 as core 30 on socket 0 00:03:51.401 EAL: Detected lcore 31 as core 31 on socket 0 00:03:51.401 EAL: Detected lcore 32 as core 32 on socket 0 00:03:51.401 EAL: Detected lcore 33 as core 33 on socket 0 00:03:51.401 EAL: Detected lcore 34 as core 34 on socket 0 00:03:51.401 EAL: Detected lcore 35 as core 35 on socket 0 00:03:51.401 EAL: Detected lcore 36 as core 0 on socket 1 00:03:51.401 EAL: Detected lcore 37 as core 1 on socket 1 00:03:51.401 EAL: Detected lcore 38 as core 2 on socket 1 00:03:51.401 EAL: Detected lcore 39 as core 3 on socket 1 00:03:51.401 EAL: Detected lcore 40 as core 4 on socket 1 00:03:51.401 EAL: Detected lcore 41 as core 5 on socket 1 00:03:51.401 EAL: Detected lcore 42 as core 6 on socket 1 00:03:51.401 EAL: Detected lcore 43 as core 7 on socket 1 00:03:51.401 EAL: Detected lcore 44 as core 8 on socket 1 00:03:51.401 EAL: Detected lcore 45 as core 9 on socket 1 00:03:51.401 EAL: Detected lcore 46 as core 10 on socket 1 00:03:51.401 EAL: Detected lcore 47 as core 11 on socket 1 00:03:51.401 EAL: Detected lcore 48 as core 12 on socket 1 00:03:51.401 EAL: Detected lcore 49 as core 13 on socket 1 00:03:51.401 EAL: Detected lcore 50 as core 14 on socket 1 00:03:51.401 EAL: Detected lcore 51 as core 15 on socket 1 00:03:51.401 EAL: Detected lcore 52 as core 16 on socket 1 00:03:51.401 EAL: Detected lcore 53 as core 17 on socket 1 00:03:51.401 EAL: Detected lcore 54 as core 18 on socket 1 00:03:51.401 EAL: Detected lcore 55 as core 19 on socket 1 00:03:51.401 EAL: Detected lcore 56 as core 20 on socket 1 00:03:51.401 EAL: Detected lcore 57 as core 21 on socket 1 00:03:51.401 EAL: Detected lcore 58 as core 22 on socket 1 00:03:51.401 EAL: Detected lcore 59 as 
core 23 on socket 1 00:03:51.401 EAL: Detected lcore 60 as core 24 on socket 1 00:03:51.401 EAL: Detected lcore 61 as core 25 on socket 1 00:03:51.401 EAL: Detected lcore 62 as core 26 on socket 1 00:03:51.401 EAL: Detected lcore 63 as core 27 on socket 1 00:03:51.401 EAL: Detected lcore 64 as core 28 on socket 1 00:03:51.401 EAL: Detected lcore 65 as core 29 on socket 1 00:03:51.401 EAL: Detected lcore 66 as core 30 on socket 1 00:03:51.401 EAL: Detected lcore 67 as core 31 on socket 1 00:03:51.401 EAL: Detected lcore 68 as core 32 on socket 1 00:03:51.401 EAL: Detected lcore 69 as core 33 on socket 1 00:03:51.401 EAL: Detected lcore 70 as core 34 on socket 1 00:03:51.401 EAL: Detected lcore 71 as core 35 on socket 1 00:03:51.401 EAL: Detected lcore 72 as core 0 on socket 0 00:03:51.401 EAL: Detected lcore 73 as core 1 on socket 0 00:03:51.401 EAL: Detected lcore 74 as core 2 on socket 0 00:03:51.401 EAL: Detected lcore 75 as core 3 on socket 0 00:03:51.401 EAL: Detected lcore 76 as core 4 on socket 0 00:03:51.401 EAL: Detected lcore 77 as core 5 on socket 0 00:03:51.401 EAL: Detected lcore 78 as core 6 on socket 0 00:03:51.401 EAL: Detected lcore 79 as core 7 on socket 0 00:03:51.401 EAL: Detected lcore 80 as core 8 on socket 0 00:03:51.401 EAL: Detected lcore 81 as core 9 on socket 0 00:03:51.401 EAL: Detected lcore 82 as core 10 on socket 0 00:03:51.401 EAL: Detected lcore 83 as core 11 on socket 0 00:03:51.401 EAL: Detected lcore 84 as core 12 on socket 0 00:03:51.401 EAL: Detected lcore 85 as core 13 on socket 0 00:03:51.401 EAL: Detected lcore 86 as core 14 on socket 0 00:03:51.401 EAL: Detected lcore 87 as core 15 on socket 0 00:03:51.401 EAL: Detected lcore 88 as core 16 on socket 0 00:03:51.401 EAL: Detected lcore 89 as core 17 on socket 0 00:03:51.401 EAL: Detected lcore 90 as core 18 on socket 0 00:03:51.401 EAL: Detected lcore 91 as core 19 on socket 0 00:03:51.401 EAL: Detected lcore 92 as core 20 on socket 0 00:03:51.401 EAL: Detected lcore 93 as 
core 21 on socket 0 00:03:51.401 EAL: Detected lcore 94 as core 22 on socket 0 00:03:51.401 EAL: Detected lcore 95 as core 23 on socket 0 00:03:51.401 EAL: Detected lcore 96 as core 24 on socket 0 00:03:51.401 EAL: Detected lcore 97 as core 25 on socket 0 00:03:51.401 EAL: Detected lcore 98 as core 26 on socket 0 00:03:51.401 EAL: Detected lcore 99 as core 27 on socket 0 00:03:51.401 EAL: Detected lcore 100 as core 28 on socket 0 00:03:51.401 EAL: Detected lcore 101 as core 29 on socket 0 00:03:51.401 EAL: Detected lcore 102 as core 30 on socket 0 00:03:51.401 EAL: Detected lcore 103 as core 31 on socket 0 00:03:51.401 EAL: Detected lcore 104 as core 32 on socket 0 00:03:51.401 EAL: Detected lcore 105 as core 33 on socket 0 00:03:51.401 EAL: Detected lcore 106 as core 34 on socket 0 00:03:51.401 EAL: Detected lcore 107 as core 35 on socket 0 00:03:51.401 EAL: Detected lcore 108 as core 0 on socket 1 00:03:51.401 EAL: Detected lcore 109 as core 1 on socket 1 00:03:51.401 EAL: Detected lcore 110 as core 2 on socket 1 00:03:51.401 EAL: Detected lcore 111 as core 3 on socket 1 00:03:51.401 EAL: Detected lcore 112 as core 4 on socket 1 00:03:51.401 EAL: Detected lcore 113 as core 5 on socket 1 00:03:51.401 EAL: Detected lcore 114 as core 6 on socket 1 00:03:51.401 EAL: Detected lcore 115 as core 7 on socket 1 00:03:51.401 EAL: Detected lcore 116 as core 8 on socket 1 00:03:51.401 EAL: Detected lcore 117 as core 9 on socket 1 00:03:51.401 EAL: Detected lcore 118 as core 10 on socket 1 00:03:51.401 EAL: Detected lcore 119 as core 11 on socket 1 00:03:51.401 EAL: Detected lcore 120 as core 12 on socket 1 00:03:51.401 EAL: Detected lcore 121 as core 13 on socket 1 00:03:51.401 EAL: Detected lcore 122 as core 14 on socket 1 00:03:51.401 EAL: Detected lcore 123 as core 15 on socket 1 00:03:51.401 EAL: Detected lcore 124 as core 16 on socket 1 00:03:51.401 EAL: Detected lcore 125 as core 17 on socket 1 00:03:51.401 EAL: Detected lcore 126 as core 18 on socket 1 00:03:51.401 
EAL: Detected lcore 127 as core 19 on socket 1 00:03:51.401 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:51.401 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:51.401 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:51.401 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:51.401 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:51.401 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:51.401 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:51.401 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:51.401 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:51.401 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:51.401 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:51.401 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:51.401 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:51.401 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:51.401 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:51.401 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:51.401 EAL: Maximum logical cores by configuration: 128 00:03:51.401 EAL: Detected CPU lcores: 128 00:03:51.401 EAL: Detected NUMA nodes: 2 00:03:51.401 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:51.401 EAL: Detected shared linkage of DPDK 00:03:51.401 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:03:51.401 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:03:51.401 EAL: Registered [vdev] bus. 
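The lcore enumeration above follows a regular pattern on this box: lcores 0-35 are the physical cores of socket 0, 36-71 those of socket 1, and 72-143 are the hyperthread siblings repeating the same layout. A minimal sketch, assuming that 36-cores-per-socket layout (the function name is illustrative, not part of EAL):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: recover (socket, core) for an lcore id on the
# 2-socket, 36-core machine in this log (lcores 72+ are HT siblings).
lcore_to_topology() {
  local lcore=$1
  local socket=$(( (lcore / 36) % 2 ))  # 0-35 -> s0, 36-71 -> s1, 72-107 -> s0, ...
  local core=$(( lcore % 36 ))
  echo "lcore $lcore: core $core on socket $socket"
}

lcore_to_topology 90    # matches "Detected lcore 90 as core 18 on socket 0"
lcore_to_topology 126   # matches "Detected lcore 126 as core 18 on socket 1"
```

This also explains why lcores 128-143 are "Skipped": the test is capped at 128 logical cores, so the tail of socket 1's hyperthreads falls outside the configured maximum.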
00:03:51.401 EAL: bus.vdev log level changed from disabled to notice 00:03:51.401 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:03:51.401 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:03:51.401 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:03:51.401 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:03:51.401 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:51.401 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:51.401 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:51.401 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:51.401 EAL: No shared files mode enabled, IPC will be disabled 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Bus pci wants IOVA as 'DC' 00:03:51.663 EAL: Bus vdev wants IOVA as 'DC' 00:03:51.663 EAL: Buses did not request a specific IOVA mode. 00:03:51.663 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:51.663 EAL: Selected IOVA mode 'VA' 00:03:51.663 EAL: Probing VFIO support... 00:03:51.663 EAL: IOMMU type 1 (Type 1) is supported 00:03:51.663 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:51.663 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:51.663 EAL: VFIO support initialized 00:03:51.663 EAL: Ask a virtual area of 0x2e000 bytes 00:03:51.663 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:51.663 EAL: Setting up physically contiguous memory... 
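The IOVA negotiation above ("Bus pci wants IOVA as 'DC' ... IOMMU is available, selecting IOVA as VA mode") can be summarized as a small decision sketch. This is a simplified model of the selection logic, not EAL's actual implementation, and the function name is an assumption:

```shell
# Hypothetical sketch of the IOVA mode decision: when every bus reports
# 'DC' (don't care), EAL falls back to the IOMMU probe, picking VA if an
# IOMMU (here, VFIO type 1) is usable and PA otherwise.
pick_iova_mode() {
  local bus_request=$1   # 'DC', 'VA', or 'PA'
  local iommu_ok=$2      # 1 if an IOMMU is available
  if [ "$bus_request" != "DC" ]; then
    echo "$bus_request"
  elif [ "$iommu_ok" -eq 1 ]; then
    echo "VA"
  else
    echo "PA"
  fi
}

pick_iova_mode DC 1   # -> VA, as selected in the log above
```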
00:03:51.663 EAL: Setting maximum number of open files to 524288 00:03:51.663 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:51.663 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:51.663 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:51.663 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.663 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:51.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.663 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.663 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:51.663 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:51.663 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.663 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:51.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.663 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.663 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:51.663 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:51.663 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.663 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:51.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.663 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.663 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:51.663 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:51.663 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.663 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:51.663 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.663 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.663 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:51.663 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:51.663 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:51.663 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.663 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:51.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:51.663 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.663 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:51.663 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:51.663 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.663 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:51.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:51.663 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.663 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:51.663 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:51.663 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.663 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:51.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:51.663 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.663 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:51.663 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:51.663 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.663 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:51.663 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:51.663 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.663 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:51.663 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:51.663 EAL: Hugepages will be freed exactly as allocated. 
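The memseg reservation sizes above are internally consistent and can be checked with a little arithmetic: each list holds n_segs:8192 pages of hugepage_sz:2097152 (2 MiB), which is exactly the 0x400000000 (16 GiB) virtual area reserved per list:

```shell
# Sketch checking the memseg arithmetic in the log: 8192 segments of
# 2 MiB per list equals the 0x400000000-byte VA reservation per list.
n_segs=8192
hugepage_sz=2097152
list_bytes=$(( n_segs * hugepage_sz ))
printf 'per-list VA: 0x%x bytes (%d GiB)\n' "$list_bytes" $(( list_bytes >> 30 ))
# With 4 lists per socket and 2 sockets, total reserved VA is 8 * 16 GiB.
```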
00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: TSC frequency is ~2400000 KHz 00:03:51.663 EAL: Main lcore 0 is ready (tid=7ff5091f8a00;cpuset=[0]) 00:03:51.663 EAL: Trying to obtain current memory policy. 00:03:51.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.663 EAL: Restoring previous memory policy: 0 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was expanded by 2MB 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:51.663 EAL: Mem event callback 'spdk:(nil)' registered 00:03:51.663 00:03:51.663 00:03:51.663 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.663 http://cunit.sourceforge.net/ 00:03:51.663 00:03:51.663 00:03:51.663 Suite: components_suite 00:03:51.663 Test: vtophys_malloc_test ...passed 00:03:51.663 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:51.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.663 EAL: Restoring previous memory policy: 4 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was expanded by 4MB 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was shrunk by 4MB 00:03:51.663 EAL: Trying to obtain current memory policy. 
00:03:51.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.663 EAL: Restoring previous memory policy: 4 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was expanded by 6MB 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was shrunk by 6MB 00:03:51.663 EAL: Trying to obtain current memory policy. 00:03:51.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.663 EAL: Restoring previous memory policy: 4 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was expanded by 10MB 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was shrunk by 10MB 00:03:51.663 EAL: Trying to obtain current memory policy. 00:03:51.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.663 EAL: Restoring previous memory policy: 4 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was expanded by 18MB 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was shrunk by 18MB 00:03:51.663 EAL: Trying to obtain current memory policy. 
00:03:51.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.663 EAL: Restoring previous memory policy: 4 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was expanded by 34MB 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was shrunk by 34MB 00:03:51.663 EAL: Trying to obtain current memory policy. 00:03:51.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.663 EAL: Restoring previous memory policy: 4 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was expanded by 66MB 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was shrunk by 66MB 00:03:51.663 EAL: Trying to obtain current memory policy. 00:03:51.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.663 EAL: Restoring previous memory policy: 4 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was expanded by 130MB 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was shrunk by 130MB 00:03:51.663 EAL: Trying to obtain current memory policy. 
00:03:51.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.663 EAL: Restoring previous memory policy: 4 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was expanded by 258MB 00:03:51.663 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.663 EAL: request: mp_malloc_sync 00:03:51.663 EAL: No shared files mode enabled, IPC is disabled 00:03:51.663 EAL: Heap on socket 0 was shrunk by 258MB 00:03:51.663 EAL: Trying to obtain current memory policy. 00:03:51.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.924 EAL: Restoring previous memory policy: 4 00:03:51.924 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.924 EAL: request: mp_malloc_sync 00:03:51.924 EAL: No shared files mode enabled, IPC is disabled 00:03:51.924 EAL: Heap on socket 0 was expanded by 514MB 00:03:51.924 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.924 EAL: request: mp_malloc_sync 00:03:51.924 EAL: No shared files mode enabled, IPC is disabled 00:03:51.924 EAL: Heap on socket 0 was shrunk by 514MB 00:03:51.924 EAL: Trying to obtain current memory policy. 
00:03:51.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.185 EAL: Restoring previous memory policy: 4 00:03:52.185 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.185 EAL: request: mp_malloc_sync 00:03:52.185 EAL: No shared files mode enabled, IPC is disabled 00:03:52.185 EAL: Heap on socket 0 was expanded by 1026MB 00:03:52.185 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.185 EAL: request: mp_malloc_sync 00:03:52.185 EAL: No shared files mode enabled, IPC is disabled 00:03:52.185 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:52.185 passed 00:03:52.185 00:03:52.185 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.185 suites 1 1 n/a 0 0 00:03:52.185 tests 2 2 2 0 0 00:03:52.185 asserts 497 497 497 0 n/a 00:03:52.185 00:03:52.185 Elapsed time = 0.701 seconds 00:03:52.185 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.185 EAL: request: mp_malloc_sync 00:03:52.185 EAL: No shared files mode enabled, IPC is disabled 00:03:52.185 EAL: Heap on socket 0 was shrunk by 2MB 00:03:52.185 EAL: No shared files mode enabled, IPC is disabled 00:03:52.185 EAL: No shared files mode enabled, IPC is disabled 00:03:52.185 EAL: No shared files mode enabled, IPC is disabled 00:03:52.185 00:03:52.185 real 0m0.834s 00:03:52.185 user 0m0.433s 00:03:52.185 sys 0m0.379s 00:03:52.447 21:52:10 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.447 21:52:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:52.447 ************************************ 00:03:52.447 END TEST env_vtophys 00:03:52.447 ************************************ 00:03:52.447 21:52:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:52.447 21:52:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.447 21:52:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.447 21:52:10 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.447 
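The heap expand/shrink sequence the vtophys suite walks above (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) appears to follow 2^k + 2; a sketch reproducing the sequence, assuming that formula describes the test's size progression (the function name is illustrative):

```shell
# Hypothetical sketch: the allocation sizes seen in the vtophys log
# match 2^k + 2 MB for k = 1..10.
sizes() {
  local k
  for k in $(seq 1 10); do
    echo $(( (1 << k) + 2 ))
  done
}
sizes | tr '\n' ' '   # 4 6 10 18 34 66 130 258 514 1026
```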
************************************ 00:03:52.447 START TEST env_pci 00:03:52.447 ************************************ 00:03:52.447 21:52:10 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:52.447 00:03:52.447 00:03:52.447 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.447 http://cunit.sourceforge.net/ 00:03:52.447 00:03:52.447 00:03:52.447 Suite: pci 00:03:52.447 Test: pci_hook ...[2024-10-12 21:52:10.768764] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3247261 has claimed it 00:03:52.447 EAL: Cannot find device (10000:00:01.0) 00:03:52.447 EAL: Failed to attach device on primary process 00:03:52.447 passed 00:03:52.447 00:03:52.447 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.447 suites 1 1 n/a 0 0 00:03:52.447 tests 1 1 1 0 0 00:03:52.447 asserts 25 25 25 0 n/a 00:03:52.447 00:03:52.447 Elapsed time = 0.031 seconds 00:03:52.447 00:03:52.447 real 0m0.051s 00:03:52.447 user 0m0.014s 00:03:52.447 sys 0m0.036s 00:03:52.447 21:52:10 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.447 21:52:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:52.447 ************************************ 00:03:52.447 END TEST env_pci 00:03:52.447 ************************************ 00:03:52.447 21:52:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:52.447 21:52:10 env -- env/env.sh@15 -- # uname 00:03:52.447 21:52:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:52.447 21:52:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:52.447 21:52:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:52.447 21:52:10 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:52.447 21:52:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.447 21:52:10 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.447 ************************************ 00:03:52.447 START TEST env_dpdk_post_init 00:03:52.447 ************************************ 00:03:52.447 21:52:10 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:52.447 EAL: Detected CPU lcores: 128 00:03:52.447 EAL: Detected NUMA nodes: 2 00:03:52.447 EAL: Detected shared linkage of DPDK 00:03:52.447 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:52.709 EAL: Selected IOVA mode 'VA' 00:03:52.709 EAL: VFIO support initialized 00:03:52.709 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:52.709 EAL: Using IOMMU type 1 (Type 1) 00:03:52.709 EAL: Ignore mapping IO port bar(1) 00:03:52.970 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:52.970 EAL: Ignore mapping IO port bar(1) 00:03:53.232 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:53.232 EAL: Ignore mapping IO port bar(1) 00:03:53.232 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:53.493 EAL: Ignore mapping IO port bar(1) 00:03:53.493 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:53.753 EAL: Ignore mapping IO port bar(1) 00:03:53.753 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:54.013 EAL: Ignore mapping IO port bar(1) 00:03:54.013 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:54.013 EAL: Ignore mapping IO port bar(1) 00:03:54.274 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:54.274 EAL: Ignore mapping IO port bar(1) 00:03:54.536 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:54.798 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:54.798 EAL: Ignore mapping IO port bar(1) 00:03:54.798 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:55.059 EAL: Ignore mapping IO port bar(1) 00:03:55.059 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:55.320 EAL: Ignore mapping IO port bar(1) 00:03:55.320 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:55.582 EAL: Ignore mapping IO port bar(1) 00:03:55.582 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:55.582 EAL: Ignore mapping IO port bar(1) 00:03:55.843 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:55.843 EAL: Ignore mapping IO port bar(1) 00:03:56.104 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:56.104 EAL: Ignore mapping IO port bar(1) 00:03:56.365 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:56.365 EAL: Ignore mapping IO port bar(1) 00:03:56.365 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:56.365 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:56.365 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:56.626 Starting DPDK initialization... 00:03:56.626 Starting SPDK post initialization... 00:03:56.626 SPDK NVMe probe 00:03:56.626 Attaching to 0000:65:00.0 00:03:56.626 Attached to 0000:65:00.0 00:03:56.626 Cleaning up... 
00:03:58.544 00:03:58.544 real 0m5.735s 00:03:58.544 user 0m0.182s 00:03:58.544 sys 0m0.107s 00:03:58.544 21:52:16 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.544 21:52:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:58.544 ************************************ 00:03:58.544 END TEST env_dpdk_post_init 00:03:58.544 ************************************ 00:03:58.544 21:52:16 env -- env/env.sh@26 -- # uname 00:03:58.544 21:52:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:58.544 21:52:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.544 21:52:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.544 21:52:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.544 21:52:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.544 ************************************ 00:03:58.544 START TEST env_mem_callbacks 00:03:58.544 ************************************ 00:03:58.544 21:52:16 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:58.544 EAL: Detected CPU lcores: 128 00:03:58.544 EAL: Detected NUMA nodes: 2 00:03:58.544 EAL: Detected shared linkage of DPDK 00:03:58.544 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.544 EAL: Selected IOVA mode 'VA' 00:03:58.544 EAL: VFIO support initialized 00:03:58.544 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:58.544 00:03:58.544 00:03:58.544 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.544 http://cunit.sourceforge.net/ 00:03:58.544 00:03:58.544 00:03:58.544 Suite: memory 00:03:58.544 Test: test ... 
00:03:58.544 register 0x200000200000 2097152 00:03:58.544 malloc 3145728 00:03:58.544 register 0x200000400000 4194304 00:03:58.544 buf 0x200000500000 len 3145728 PASSED 00:03:58.544 malloc 64 00:03:58.544 buf 0x2000004fff40 len 64 PASSED 00:03:58.544 malloc 4194304 00:03:58.544 register 0x200000800000 6291456 00:03:58.544 buf 0x200000a00000 len 4194304 PASSED 00:03:58.544 free 0x200000500000 3145728 00:03:58.544 free 0x2000004fff40 64 00:03:58.544 unregister 0x200000400000 4194304 PASSED 00:03:58.544 free 0x200000a00000 4194304 00:03:58.544 unregister 0x200000800000 6291456 PASSED 00:03:58.544 malloc 8388608 00:03:58.544 register 0x200000400000 10485760 00:03:58.544 buf 0x200000600000 len 8388608 PASSED 00:03:58.544 free 0x200000600000 8388608 00:03:58.544 unregister 0x200000400000 10485760 PASSED 00:03:58.544 passed 00:03:58.544 00:03:58.544 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.544 suites 1 1 n/a 0 0 00:03:58.544 tests 1 1 1 0 0 00:03:58.544 asserts 15 15 15 0 n/a 00:03:58.544 00:03:58.544 Elapsed time = 0.010 seconds 00:03:58.544 00:03:58.544 real 0m0.067s 00:03:58.544 user 0m0.025s 00:03:58.544 sys 0m0.042s 00:03:58.544 21:52:16 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.544 21:52:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:58.544 ************************************ 00:03:58.544 END TEST env_mem_callbacks 00:03:58.544 ************************************ 00:03:58.544 00:03:58.544 real 0m7.513s 00:03:58.544 user 0m1.121s 00:03:58.544 sys 0m0.956s 00:03:58.544 21:52:16 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.544 21:52:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.544 ************************************ 00:03:58.544 END TEST env 00:03:58.544 ************************************ 00:03:58.544 21:52:16 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:58.544 21:52:16 
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.544 21:52:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.544 21:52:16 -- common/autotest_common.sh@10 -- # set +x 00:03:58.544 ************************************ 00:03:58.544 START TEST rpc 00:03:58.544 ************************************ 00:03:58.544 21:52:16 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:58.544 * Looking for test storage... 00:03:58.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:58.545 21:52:17 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:58.545 21:52:17 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:58.545 21:52:17 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:58.841 21:52:17 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:58.841 21:52:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.841 21:52:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.841 21:52:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.841 21:52:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.841 21:52:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.841 21:52:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.841 21:52:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.841 21:52:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.841 21:52:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.841 21:52:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.841 21:52:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.841 21:52:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:58.841 21:52:17 rpc -- scripts/common.sh@345 -- # : 1 00:03:58.841 21:52:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.841 21:52:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.841 21:52:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:58.841 21:52:17 rpc -- scripts/common.sh@353 -- # local d=1 00:03:58.841 21:52:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.841 21:52:17 rpc -- scripts/common.sh@355 -- # echo 1 00:03:58.841 21:52:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.841 21:52:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:58.841 21:52:17 rpc -- scripts/common.sh@353 -- # local d=2 00:03:58.841 21:52:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.841 21:52:17 rpc -- scripts/common.sh@355 -- # echo 2 00:03:58.841 21:52:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.841 21:52:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.841 21:52:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.841 21:52:17 rpc -- scripts/common.sh@368 -- # return 0 00:03:58.841 21:52:17 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.841 21:52:17 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:58.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.841 --rc genhtml_branch_coverage=1 00:03:58.841 --rc genhtml_function_coverage=1 00:03:58.841 --rc genhtml_legend=1 00:03:58.841 --rc geninfo_all_blocks=1 00:03:58.842 --rc geninfo_unexecuted_blocks=1 00:03:58.842 00:03:58.842 ' 00:03:58.842 21:52:17 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:58.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.842 --rc genhtml_branch_coverage=1 00:03:58.842 --rc genhtml_function_coverage=1 00:03:58.842 --rc genhtml_legend=1 00:03:58.842 --rc geninfo_all_blocks=1 00:03:58.842 --rc geninfo_unexecuted_blocks=1 00:03:58.842 00:03:58.842 ' 00:03:58.842 21:52:17 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:58.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:58.842 --rc genhtml_branch_coverage=1 00:03:58.842 --rc genhtml_function_coverage=1 00:03:58.842 --rc genhtml_legend=1 00:03:58.842 --rc geninfo_all_blocks=1 00:03:58.842 --rc geninfo_unexecuted_blocks=1 00:03:58.842 00:03:58.842 ' 00:03:58.842 21:52:17 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:58.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.842 --rc genhtml_branch_coverage=1 00:03:58.842 --rc genhtml_function_coverage=1 00:03:58.842 --rc genhtml_legend=1 00:03:58.842 --rc geninfo_all_blocks=1 00:03:58.842 --rc geninfo_unexecuted_blocks=1 00:03:58.842 00:03:58.842 ' 00:03:58.842 21:52:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3248612 00:03:58.842 21:52:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.842 21:52:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3248612 00:03:58.842 21:52:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:58.842 21:52:17 rpc -- common/autotest_common.sh@831 -- # '[' -z 3248612 ']' 00:03:58.842 21:52:17 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.842 21:52:17 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:58.842 21:52:17 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.842 21:52:17 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:58.842 21:52:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.842 [2024-10-12 21:52:17.174393] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:03:58.842 [2024-10-12 21:52:17.174465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248612 ] 00:03:58.842 [2024-10-12 21:52:17.257444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.842 [2024-10-12 21:52:17.304833] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:58.842 [2024-10-12 21:52:17.304885] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3248612' to capture a snapshot of events at runtime. 00:03:58.842 [2024-10-12 21:52:17.304895] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:58.842 [2024-10-12 21:52:17.304903] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:58.842 [2024-10-12 21:52:17.304909] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3248612 for offline analysis/debug. 
00:03:58.842 [2024-10-12 21:52:17.304941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.878 21:52:17 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:59.878 21:52:17 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:59.878 21:52:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:59.878 21:52:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:59.878 21:52:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:59.878 21:52:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:59.878 21:52:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:59.878 21:52:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:59.878 21:52:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.878 ************************************ 00:03:59.878 START TEST rpc_integrity 00:03:59.878 ************************************ 00:03:59.878 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:59.878 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:59.878 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.878 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.878 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.878 21:52:18 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:59.878 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:59.878 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.878 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.878 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.878 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.878 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.878 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:59.878 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.878 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.878 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.878 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.878 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.878 { 00:03:59.878 "name": "Malloc0", 00:03:59.878 "aliases": [ 00:03:59.878 "712feefc-d225-4a91-aabe-5436376b0324" 00:03:59.878 ], 00:03:59.878 "product_name": "Malloc disk", 00:03:59.878 "block_size": 512, 00:03:59.878 "num_blocks": 16384, 00:03:59.878 "uuid": "712feefc-d225-4a91-aabe-5436376b0324", 00:03:59.878 "assigned_rate_limits": { 00:03:59.878 "rw_ios_per_sec": 0, 00:03:59.878 "rw_mbytes_per_sec": 0, 00:03:59.878 "r_mbytes_per_sec": 0, 00:03:59.878 "w_mbytes_per_sec": 0 00:03:59.878 }, 00:03:59.878 "claimed": false, 00:03:59.878 "zoned": false, 00:03:59.878 "supported_io_types": { 00:03:59.878 "read": true, 00:03:59.878 "write": true, 00:03:59.878 "unmap": true, 00:03:59.878 "flush": true, 00:03:59.878 "reset": true, 00:03:59.878 "nvme_admin": false, 00:03:59.878 "nvme_io": false, 00:03:59.878 "nvme_io_md": false, 00:03:59.878 "write_zeroes": true, 00:03:59.878 "zcopy": true, 00:03:59.878 "get_zone_info": false, 00:03:59.878 
"zone_management": false, 00:03:59.879 "zone_append": false, 00:03:59.879 "compare": false, 00:03:59.879 "compare_and_write": false, 00:03:59.879 "abort": true, 00:03:59.879 "seek_hole": false, 00:03:59.879 "seek_data": false, 00:03:59.879 "copy": true, 00:03:59.879 "nvme_iov_md": false 00:03:59.879 }, 00:03:59.879 "memory_domains": [ 00:03:59.879 { 00:03:59.879 "dma_device_id": "system", 00:03:59.879 "dma_device_type": 1 00:03:59.879 }, 00:03:59.879 { 00:03:59.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.879 "dma_device_type": 2 00:03:59.879 } 00:03:59.879 ], 00:03:59.879 "driver_specific": {} 00:03:59.879 } 00:03:59.879 ]' 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.879 [2024-10-12 21:52:18.163721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:59.879 [2024-10-12 21:52:18.163767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.879 [2024-10-12 21:52:18.163783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x662da0 00:03:59.879 [2024-10-12 21:52:18.163791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.879 [2024-10-12 21:52:18.165325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.879 [2024-10-12 21:52:18.165363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.879 Passthru0 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.879 { 00:03:59.879 "name": "Malloc0", 00:03:59.879 "aliases": [ 00:03:59.879 "712feefc-d225-4a91-aabe-5436376b0324" 00:03:59.879 ], 00:03:59.879 "product_name": "Malloc disk", 00:03:59.879 "block_size": 512, 00:03:59.879 "num_blocks": 16384, 00:03:59.879 "uuid": "712feefc-d225-4a91-aabe-5436376b0324", 00:03:59.879 "assigned_rate_limits": { 00:03:59.879 "rw_ios_per_sec": 0, 00:03:59.879 "rw_mbytes_per_sec": 0, 00:03:59.879 "r_mbytes_per_sec": 0, 00:03:59.879 "w_mbytes_per_sec": 0 00:03:59.879 }, 00:03:59.879 "claimed": true, 00:03:59.879 "claim_type": "exclusive_write", 00:03:59.879 "zoned": false, 00:03:59.879 "supported_io_types": { 00:03:59.879 "read": true, 00:03:59.879 "write": true, 00:03:59.879 "unmap": true, 00:03:59.879 "flush": true, 00:03:59.879 "reset": true, 00:03:59.879 "nvme_admin": false, 00:03:59.879 "nvme_io": false, 00:03:59.879 "nvme_io_md": false, 00:03:59.879 "write_zeroes": true, 00:03:59.879 "zcopy": true, 00:03:59.879 "get_zone_info": false, 00:03:59.879 "zone_management": false, 00:03:59.879 "zone_append": false, 00:03:59.879 "compare": false, 00:03:59.879 "compare_and_write": false, 00:03:59.879 "abort": true, 00:03:59.879 "seek_hole": false, 00:03:59.879 "seek_data": false, 00:03:59.879 "copy": true, 00:03:59.879 "nvme_iov_md": false 00:03:59.879 }, 00:03:59.879 "memory_domains": [ 00:03:59.879 { 00:03:59.879 "dma_device_id": "system", 00:03:59.879 "dma_device_type": 1 00:03:59.879 }, 00:03:59.879 { 00:03:59.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.879 "dma_device_type": 2 00:03:59.879 } 00:03:59.879 ], 00:03:59.879 "driver_specific": {} 00:03:59.879 }, 00:03:59.879 { 
00:03:59.879 "name": "Passthru0", 00:03:59.879 "aliases": [ 00:03:59.879 "34af2e47-c54d-569e-9c32-e3f1de64c4c4" 00:03:59.879 ], 00:03:59.879 "product_name": "passthru", 00:03:59.879 "block_size": 512, 00:03:59.879 "num_blocks": 16384, 00:03:59.879 "uuid": "34af2e47-c54d-569e-9c32-e3f1de64c4c4", 00:03:59.879 "assigned_rate_limits": { 00:03:59.879 "rw_ios_per_sec": 0, 00:03:59.879 "rw_mbytes_per_sec": 0, 00:03:59.879 "r_mbytes_per_sec": 0, 00:03:59.879 "w_mbytes_per_sec": 0 00:03:59.879 }, 00:03:59.879 "claimed": false, 00:03:59.879 "zoned": false, 00:03:59.879 "supported_io_types": { 00:03:59.879 "read": true, 00:03:59.879 "write": true, 00:03:59.879 "unmap": true, 00:03:59.879 "flush": true, 00:03:59.879 "reset": true, 00:03:59.879 "nvme_admin": false, 00:03:59.879 "nvme_io": false, 00:03:59.879 "nvme_io_md": false, 00:03:59.879 "write_zeroes": true, 00:03:59.879 "zcopy": true, 00:03:59.879 "get_zone_info": false, 00:03:59.879 "zone_management": false, 00:03:59.879 "zone_append": false, 00:03:59.879 "compare": false, 00:03:59.879 "compare_and_write": false, 00:03:59.879 "abort": true, 00:03:59.879 "seek_hole": false, 00:03:59.879 "seek_data": false, 00:03:59.879 "copy": true, 00:03:59.879 "nvme_iov_md": false 00:03:59.879 }, 00:03:59.879 "memory_domains": [ 00:03:59.879 { 00:03:59.879 "dma_device_id": "system", 00:03:59.879 "dma_device_type": 1 00:03:59.879 }, 00:03:59.879 { 00:03:59.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.879 "dma_device_type": 2 00:03:59.879 } 00:03:59.879 ], 00:03:59.879 "driver_specific": { 00:03:59.879 "passthru": { 00:03:59.879 "name": "Passthru0", 00:03:59.879 "base_bdev_name": "Malloc0" 00:03:59.879 } 00:03:59.879 } 00:03:59.879 } 00:03:59.879 ]' 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.879 21:52:18 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:59.879 21:52:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:59.879 00:03:59.879 real 0m0.316s 00:03:59.879 user 0m0.185s 00:03:59.879 sys 0m0.056s 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.879 21:52:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.879 ************************************ 00:03:59.879 END TEST rpc_integrity 00:03:59.879 ************************************ 00:04:00.160 21:52:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:00.160 21:52:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.160 21:52:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.160 21:52:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.160 ************************************ 00:04:00.160 START TEST rpc_plugins 
00:04:00.160 ************************************ 00:04:00.160 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:00.160 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:00.160 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.160 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.160 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.160 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:00.160 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:00.160 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.160 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.160 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.160 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:00.160 { 00:04:00.160 "name": "Malloc1", 00:04:00.160 "aliases": [ 00:04:00.161 "eb724406-7893-497b-8c46-6d27d28dc1a9" 00:04:00.161 ], 00:04:00.161 "product_name": "Malloc disk", 00:04:00.161 "block_size": 4096, 00:04:00.161 "num_blocks": 256, 00:04:00.161 "uuid": "eb724406-7893-497b-8c46-6d27d28dc1a9", 00:04:00.161 "assigned_rate_limits": { 00:04:00.161 "rw_ios_per_sec": 0, 00:04:00.161 "rw_mbytes_per_sec": 0, 00:04:00.161 "r_mbytes_per_sec": 0, 00:04:00.161 "w_mbytes_per_sec": 0 00:04:00.161 }, 00:04:00.161 "claimed": false, 00:04:00.161 "zoned": false, 00:04:00.161 "supported_io_types": { 00:04:00.161 "read": true, 00:04:00.161 "write": true, 00:04:00.161 "unmap": true, 00:04:00.161 "flush": true, 00:04:00.161 "reset": true, 00:04:00.161 "nvme_admin": false, 00:04:00.161 "nvme_io": false, 00:04:00.161 "nvme_io_md": false, 00:04:00.161 "write_zeroes": true, 00:04:00.161 "zcopy": true, 00:04:00.161 "get_zone_info": false, 00:04:00.161 "zone_management": false, 00:04:00.161 
"zone_append": false,
00:04:00.161 "compare": false,
00:04:00.161 "compare_and_write": false,
00:04:00.161 "abort": true,
00:04:00.161 "seek_hole": false,
00:04:00.161 "seek_data": false,
00:04:00.161 "copy": true,
00:04:00.161 "nvme_iov_md": false
00:04:00.161 },
00:04:00.161 "memory_domains": [
00:04:00.161 {
00:04:00.161 "dma_device_id": "system",
00:04:00.161 "dma_device_type": 1
00:04:00.161 },
00:04:00.161 {
00:04:00.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:00.161 "dma_device_type": 2
00:04:00.161 }
00:04:00.161 ],
00:04:00.161 "driver_specific": {}
00:04:00.161 }
00:04:00.161 ]'
00:04:00.161 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:00.161 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:00.161 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:00.161 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.161 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:00.161 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.161 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:00.161 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.161 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:00.161 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.161 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:00.161 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:00.161 21:52:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:00.161 
00:04:00.161 real 0m0.152s
00:04:00.161 user 0m0.092s
00:04:00.161 sys 0m0.024s
00:04:00.161 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:00.161 21:52:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:00.161 ************************************
00:04:00.161 END TEST rpc_plugins
00:04:00.161 ************************************
00:04:00.161 21:52:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:00.161 21:52:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:00.161 21:52:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:00.161 21:52:18 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.161 ************************************
00:04:00.161 START TEST rpc_trace_cmd_test
00:04:00.161 ************************************
00:04:00.161 21:52:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:04:00.161 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:00.161 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:00.161 21:52:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.161 21:52:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:00.422 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3248612",
00:04:00.422 "tpoint_group_mask": "0x8",
00:04:00.422 "iscsi_conn": {
00:04:00.422 "mask": "0x2",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "scsi": {
00:04:00.422 "mask": "0x4",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "bdev": {
00:04:00.422 "mask": "0x8",
00:04:00.422 "tpoint_mask": "0xffffffffffffffff"
00:04:00.422 },
00:04:00.422 "nvmf_rdma": {
00:04:00.422 "mask": "0x10",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "nvmf_tcp": {
00:04:00.422 "mask": "0x20",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "ftl": {
00:04:00.422 "mask": "0x40",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "blobfs": {
00:04:00.422 "mask": "0x80",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "dsa": {
00:04:00.422 "mask": "0x200",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "thread": {
00:04:00.422 "mask": "0x400",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "nvme_pcie": {
00:04:00.422 "mask": "0x800",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "iaa": {
00:04:00.422 "mask": "0x1000",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "nvme_tcp": {
00:04:00.422 "mask": "0x2000",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "bdev_nvme": {
00:04:00.422 "mask": "0x4000",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "sock": {
00:04:00.422 "mask": "0x8000",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "blob": {
00:04:00.422 "mask": "0x10000",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 },
00:04:00.422 "bdev_raid": {
00:04:00.422 "mask": "0x20000",
00:04:00.422 "tpoint_mask": "0x0"
00:04:00.422 }
00:04:00.422 }'
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']'
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:04:00.422 
00:04:00.422 real 0m0.251s
00:04:00.422 user 0m0.207s
00:04:00.422 sys 0m0.036s
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:00.422 21:52:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:00.422 ************************************
00:04:00.422 END TEST rpc_trace_cmd_test
00:04:00.422 ************************************
00:04:00.683 21:52:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:04:00.683 21:52:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:04:00.683 21:52:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:04:00.683 21:52:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:00.683 21:52:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:00.683 21:52:18 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:00.683 ************************************
00:04:00.683 START TEST rpc_daemon_integrity
00:04:00.683 ************************************
00:04:00.683 21:52:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:04:00.683 21:52:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:00.683 21:52:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.683 21:52:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.683 21:52:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.683 21:52:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:00.683 21:52:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:00.683 {
00:04:00.683 "name": "Malloc2",
00:04:00.683 "aliases": [
00:04:00.683 "0cae71d6-5264-448d-9682-0c9d0787e24d"
00:04:00.683 ],
00:04:00.683 "product_name": "Malloc disk",
00:04:00.683 "block_size": 512,
00:04:00.683 "num_blocks": 16384,
00:04:00.683 "uuid": "0cae71d6-5264-448d-9682-0c9d0787e24d",
00:04:00.683 "assigned_rate_limits": {
00:04:00.683 "rw_ios_per_sec": 0,
00:04:00.683 "rw_mbytes_per_sec": 0,
00:04:00.683 "r_mbytes_per_sec": 0,
00:04:00.683 "w_mbytes_per_sec": 0
00:04:00.683 },
00:04:00.683 "claimed": false,
00:04:00.683 "zoned": false,
00:04:00.683 "supported_io_types": {
00:04:00.683 "read": true,
00:04:00.683 "write": true,
00:04:00.683 "unmap": true,
00:04:00.683 "flush": true,
00:04:00.683 "reset": true,
00:04:00.683 "nvme_admin": false,
00:04:00.683 "nvme_io": false,
00:04:00.683 "nvme_io_md": false,
00:04:00.683 "write_zeroes": true,
00:04:00.683 "zcopy": true,
00:04:00.683 "get_zone_info": false,
00:04:00.683 "zone_management": false,
00:04:00.683 "zone_append": false,
00:04:00.683 "compare": false,
00:04:00.683 "compare_and_write": false,
00:04:00.683 "abort": true,
00:04:00.683 "seek_hole": false,
00:04:00.683 "seek_data": false,
00:04:00.683 "copy": true,
00:04:00.683 "nvme_iov_md": false
00:04:00.683 },
00:04:00.683 "memory_domains": [
00:04:00.683 {
00:04:00.683 "dma_device_id": "system",
00:04:00.683 "dma_device_type": 1
00:04:00.683 },
00:04:00.683 {
00:04:00.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:00.683 "dma_device_type": 2
00:04:00.683 }
00:04:00.683 ],
00:04:00.683 "driver_specific": {}
00:04:00.683 }
00:04:00.683 ]'
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.683 [2024-10-12 21:52:19.118304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:04:00.683 [2024-10-12 21:52:19.118351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:00.683 [2024-10-12 21:52:19.118365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x666520
00:04:00.683 [2024-10-12 21:52:19.118373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:00.683 [2024-10-12 21:52:19.119833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:00.683 [2024-10-12 21:52:19.119870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:00.683 Passthru0
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.683 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:00.683 {
00:04:00.683 "name": "Malloc2",
00:04:00.683 "aliases": [
00:04:00.683 "0cae71d6-5264-448d-9682-0c9d0787e24d"
00:04:00.683 ],
00:04:00.683 "product_name": "Malloc disk",
00:04:00.683 "block_size": 512,
00:04:00.683 "num_blocks": 16384,
00:04:00.683 "uuid": "0cae71d6-5264-448d-9682-0c9d0787e24d",
00:04:00.683 "assigned_rate_limits": {
00:04:00.683 "rw_ios_per_sec": 0,
00:04:00.683 "rw_mbytes_per_sec": 0,
00:04:00.683 "r_mbytes_per_sec": 0,
00:04:00.683 "w_mbytes_per_sec": 0
00:04:00.683 },
00:04:00.683 "claimed": true,
00:04:00.683 "claim_type": "exclusive_write",
00:04:00.683 "zoned": false,
00:04:00.683 "supported_io_types": {
00:04:00.683 "read": true,
00:04:00.683 "write": true,
00:04:00.683 "unmap": true,
00:04:00.683 "flush": true,
00:04:00.683 "reset": true,
00:04:00.683 "nvme_admin": false,
00:04:00.683 "nvme_io": false,
00:04:00.683 "nvme_io_md": false,
00:04:00.683 "write_zeroes": true,
00:04:00.683 "zcopy": true,
00:04:00.683 "get_zone_info": false,
00:04:00.683 "zone_management": false,
00:04:00.683 "zone_append": false,
00:04:00.683 "compare": false,
00:04:00.683 "compare_and_write": false,
00:04:00.683 "abort": true,
00:04:00.683 "seek_hole": false,
00:04:00.683 "seek_data": false,
00:04:00.683 "copy": true,
00:04:00.683 "nvme_iov_md": false
00:04:00.683 },
00:04:00.683 "memory_domains": [
00:04:00.683 {
00:04:00.683 "dma_device_id": "system",
00:04:00.683 "dma_device_type": 1
00:04:00.683 },
00:04:00.683 {
00:04:00.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:00.683 "dma_device_type": 2
00:04:00.683 }
00:04:00.683 ],
00:04:00.683 "driver_specific": {}
00:04:00.683 },
00:04:00.683 {
00:04:00.683 "name": "Passthru0",
00:04:00.683 "aliases": [
00:04:00.683 "2c2bbe9f-886b-5177-aad5-0da0baa18b43"
00:04:00.683 ],
00:04:00.683 "product_name": "passthru",
00:04:00.683 "block_size": 512,
00:04:00.683 "num_blocks": 16384,
00:04:00.683 "uuid": "2c2bbe9f-886b-5177-aad5-0da0baa18b43",
00:04:00.683 "assigned_rate_limits": {
00:04:00.683 "rw_ios_per_sec": 0,
00:04:00.683 "rw_mbytes_per_sec": 0,
00:04:00.683 "r_mbytes_per_sec": 0,
00:04:00.683 "w_mbytes_per_sec": 0
00:04:00.683 },
00:04:00.683 "claimed": false,
00:04:00.683 "zoned": false,
00:04:00.683 "supported_io_types": {
00:04:00.683 "read": true,
00:04:00.683 "write": true,
00:04:00.683 "unmap": true,
00:04:00.683 "flush": true,
00:04:00.683 "reset": true,
00:04:00.683 "nvme_admin": false,
00:04:00.683 "nvme_io": false,
00:04:00.683 "nvme_io_md": false,
00:04:00.683 "write_zeroes": true,
00:04:00.683 "zcopy": true,
00:04:00.683 "get_zone_info": false,
00:04:00.683 "zone_management": false,
00:04:00.683 "zone_append": false,
00:04:00.683 "compare": false,
00:04:00.683 "compare_and_write": false,
00:04:00.683 "abort": true,
00:04:00.683 "seek_hole": false,
00:04:00.683 "seek_data": false,
00:04:00.683 "copy": true,
00:04:00.683 "nvme_iov_md": false
00:04:00.683 },
00:04:00.683 "memory_domains": [
00:04:00.683 {
00:04:00.683 "dma_device_id": "system",
00:04:00.683 "dma_device_type": 1
00:04:00.683 },
00:04:00.683 {
00:04:00.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:00.683 "dma_device_type": 2
00:04:00.683 }
00:04:00.683 ],
00:04:00.683 "driver_specific": {
00:04:00.683 "passthru": {
00:04:00.683 "name": "Passthru0",
00:04:00.683 "base_bdev_name": "Malloc2"
00:04:00.683 }
00:04:00.683 }
00:04:00.683 }
00:04:00.683 ]'
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:00.944 
00:04:00.944 real 0m0.299s
00:04:00.944 user 0m0.187s
00:04:00.944 sys 0m0.045s
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:00.944 21:52:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:00.944 ************************************
00:04:00.944 END TEST rpc_daemon_integrity
00:04:00.944 ************************************
00:04:00.944 21:52:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:04:00.944 21:52:19 rpc -- rpc/rpc.sh@84 -- # killprocess 3248612
00:04:00.944 21:52:19 rpc -- common/autotest_common.sh@950 -- # '[' -z 3248612 ']'
00:04:00.944 21:52:19 rpc -- common/autotest_common.sh@954 -- # kill -0 3248612
00:04:00.944 21:52:19 rpc -- common/autotest_common.sh@955 -- # uname
00:04:00.944 21:52:19 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:00.944 21:52:19 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3248612
00:04:00.944 21:52:19 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:00.944 21:52:19 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:00.944 21:52:19 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3248612' killing process with pid 3248612
00:04:00.945 21:52:19 rpc -- common/autotest_common.sh@969 -- # kill 3248612
00:04:00.945 21:52:19 rpc -- common/autotest_common.sh@974 -- # wait 3248612
00:04:01.205 
00:04:01.205 real 0m2.727s
00:04:01.205 user 0m3.467s
00:04:01.205 sys 0m0.843s
00:04:01.205 21:52:19 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:01.205 21:52:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:01.205 ************************************
00:04:01.205 END TEST rpc
00:04:01.205 ************************************
00:04:01.205 21:52:19 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:01.205 21:52:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:01.205 21:52:19 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:01.205 21:52:19 -- common/autotest_common.sh@10 -- # set +x
00:04:01.466 ************************************
00:04:01.466 START TEST skip_rpc
00:04:01.466 ************************************
00:04:01.466 21:52:19 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:01.466 * Looking for test storage...
00:04:01.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:01.466 21:52:19 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:01.466 21:52:19 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:04:01.466 21:52:19 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:01.466 21:52:19 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:01.466 21:52:19 skip_rpc -- scripts/common.sh@368 -- # return 0
00:04:01.466 21:52:19 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:01.466 21:52:19 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:01.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.467 --rc genhtml_branch_coverage=1
00:04:01.467 --rc genhtml_function_coverage=1
00:04:01.467 --rc genhtml_legend=1
00:04:01.467 --rc geninfo_all_blocks=1
00:04:01.467 --rc geninfo_unexecuted_blocks=1
00:04:01.467 
00:04:01.467 '
00:04:01.467 21:52:19 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:01.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.467 --rc genhtml_branch_coverage=1
00:04:01.467 --rc genhtml_function_coverage=1
00:04:01.467 --rc genhtml_legend=1
00:04:01.467 --rc geninfo_all_blocks=1
00:04:01.467 --rc geninfo_unexecuted_blocks=1
00:04:01.467 
00:04:01.467 '
00:04:01.467 21:52:19 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:01.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.467 --rc genhtml_branch_coverage=1
00:04:01.467 --rc genhtml_function_coverage=1
00:04:01.467 --rc genhtml_legend=1
00:04:01.467 --rc geninfo_all_blocks=1
00:04:01.467 --rc geninfo_unexecuted_blocks=1
00:04:01.467 
00:04:01.467 '
00:04:01.467 21:52:19 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:01.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:01.467 --rc genhtml_branch_coverage=1
00:04:01.467 --rc genhtml_function_coverage=1
00:04:01.467 --rc genhtml_legend=1
00:04:01.467 --rc geninfo_all_blocks=1
00:04:01.467 --rc geninfo_unexecuted_blocks=1
00:04:01.467 
00:04:01.467 '
00:04:01.467 21:52:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:01.467 21:52:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:01.467 21:52:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:04:01.467 21:52:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:01.467 21:52:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:01.467 21:52:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:01.467 ************************************
00:04:01.467 START TEST skip_rpc
00:04:01.467 ************************************
00:04:01.467 21:52:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc
00:04:01.467 21:52:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3249473
00:04:01.467 21:52:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:01.467 21:52:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:04:01.467 21:52:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:04:01.727 [2024-10-12 21:52:20.014741] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... [2024-10-12 21:52:20.014807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249473 ] [2024-10-12 21:52:20.098350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-10-12 21:52:20.152405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3249473
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3249473 ']'
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3249473
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:07.016 21:52:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3249473
00:04:07.016 21:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:07.016 21:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:07.016 21:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3249473' killing process with pid 3249473
00:04:07.016 21:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3249473
00:04:07.016 21:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3249473
00:04:07.016 
00:04:07.016 real 0m5.267s
00:04:07.016 user 0m5.013s
00:04:07.016 sys 0m0.295s
00:04:07.016 21:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:07.016 21:52:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:07.016 ************************************
00:04:07.016 END TEST skip_rpc
00:04:07.016 ************************************
00:04:07.016 21:52:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:07.016 21:52:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:07.016 21:52:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:07.016 21:52:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:07.016 ************************************
00:04:07.016 START TEST skip_rpc_with_json
00:04:07.016 ************************************
00:04:07.016 21:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json
00:04:07.016 21:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:07.016 21:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3250507
00:04:07.016 21:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:07.016 21:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3250507
00:04:07.016 21:52:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:07.016 21:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3250507 ']'
00:04:07.016 21:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:07.016 21:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:07.016 21:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:07.017 21:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:07.017 21:52:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:07.017 [2024-10-12 21:52:25.359931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:04:07.017 [2024-10-12 21:52:25.359985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3250507 ] [2024-10-12 21:52:25.437484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-10-12 21:52:25.478976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:07.958 [2024-10-12 21:52:26.152691] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:07.958 request:
00:04:07.958 {
00:04:07.958 "trtype": "tcp",
00:04:07.958 "method": "nvmf_get_transports",
00:04:07.958 "req_id": 1
00:04:07.958 }
00:04:07.958 Got JSON-RPC error response
00:04:07.958 response:
00:04:07.958 {
00:04:07.958 "code": -19,
00:04:07.958 "message": "No such device"
00:04:07.958 }
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:07.958 [2024-10-12 21:52:26.164786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:07.958 21:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:07.958 {
00:04:07.958 "subsystems": [
00:04:07.958 {
00:04:07.958 "subsystem": "fsdev",
00:04:07.958 "config": [
00:04:07.958 {
00:04:07.958 "method": "fsdev_set_opts",
00:04:07.958 "params": {
00:04:07.958 "fsdev_io_pool_size": 65535,
00:04:07.958 "fsdev_io_cache_size": 256
00:04:07.958 }
00:04:07.958 }
00:04:07.958 ]
00:04:07.958 },
00:04:07.958 {
00:04:07.958 "subsystem": "vfio_user_target",
00:04:07.958 "config": null
00:04:07.958 },
00:04:07.958 {
00:04:07.958 "subsystem": "keyring",
00:04:07.958 "config": []
00:04:07.958 },
00:04:07.958 {
00:04:07.958 "subsystem": "iobuf",
00:04:07.958 "config": [
00:04:07.958 {
00:04:07.958 "method": "iobuf_set_options",
00:04:07.958 "params": {
00:04:07.958 "small_pool_count": 8192,
00:04:07.958 "large_pool_count": 1024,
00:04:07.958 "small_bufsize": 8192,
00:04:07.958 "large_bufsize": 135168
00:04:07.958 }
00:04:07.958 }
00:04:07.958 ]
00:04:07.958 },
00:04:07.958 {
00:04:07.958 "subsystem": "sock",
00:04:07.958 "config": [
00:04:07.958 {
00:04:07.958 "method": "sock_set_default_impl",
00:04:07.958 "params": {
00:04:07.958 "impl_name": "posix"
00:04:07.958 }
00:04:07.958 },
00:04:07.958 {
00:04:07.958 "method": "sock_impl_set_options",
00:04:07.958 "params": {
00:04:07.958 "impl_name": "ssl",
00:04:07.958 "recv_buf_size": 4096,
00:04:07.958 "send_buf_size": 4096,
00:04:07.958 "enable_recv_pipe": true,
00:04:07.958 "enable_quickack": false,
00:04:07.958 "enable_placement_id": 0,
00:04:07.958 "enable_zerocopy_send_server": true,
00:04:07.958 "enable_zerocopy_send_client": false,
00:04:07.958 "zerocopy_threshold": 0,
00:04:07.958 "tls_version": 0,
00:04:07.958 "enable_ktls": false
00:04:07.959 }
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "method": "sock_impl_set_options",
00:04:07.959 "params": {
00:04:07.959 "impl_name": "posix",
00:04:07.959 "recv_buf_size": 2097152,
00:04:07.959 "send_buf_size": 2097152,
00:04:07.959 "enable_recv_pipe": true,
00:04:07.959 "enable_quickack": false,
00:04:07.959 "enable_placement_id": 0,
00:04:07.959 "enable_zerocopy_send_server": true,
00:04:07.959 "enable_zerocopy_send_client": false,
00:04:07.959 "zerocopy_threshold": 0,
00:04:07.959 "tls_version": 0,
00:04:07.959 "enable_ktls": false
00:04:07.959 }
00:04:07.959 }
00:04:07.959 ]
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "subsystem": "vmd",
00:04:07.959 "config": []
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "subsystem": "accel",
00:04:07.959 "config": [
00:04:07.959 {
00:04:07.959 "method": "accel_set_options",
00:04:07.959 "params": {
00:04:07.959 "small_cache_size": 128,
00:04:07.959 "large_cache_size": 16,
00:04:07.959 "task_count": 2048,
00:04:07.959 "sequence_count": 2048,
00:04:07.959 "buf_count": 2048
00:04:07.959 }
00:04:07.959 }
00:04:07.959 ]
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "subsystem": "bdev",
00:04:07.959 "config": [
00:04:07.959 {
00:04:07.959 "method": "bdev_set_options",
00:04:07.959 "params": {
00:04:07.959 "bdev_io_pool_size": 65535,
00:04:07.959 "bdev_io_cache_size": 256,
00:04:07.959 "bdev_auto_examine": true,
00:04:07.959 "iobuf_small_cache_size": 128,
00:04:07.959 "iobuf_large_cache_size": 16
00:04:07.959 }
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "method": "bdev_raid_set_options",
00:04:07.959 "params": {
00:04:07.959 "process_window_size_kb": 1024,
00:04:07.959 "process_max_bandwidth_mb_sec": 0
00:04:07.959 }
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "method": "bdev_iscsi_set_options",
00:04:07.959 "params": {
00:04:07.959 "timeout_sec": 30
00:04:07.959 }
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "method": "bdev_nvme_set_options",
00:04:07.959 "params": {
00:04:07.959 "action_on_timeout": "none",
00:04:07.959 "timeout_us": 0,
00:04:07.959 "timeout_admin_us": 0,
00:04:07.959 "keep_alive_timeout_ms": 10000,
00:04:07.959 "arbitration_burst": 0,
00:04:07.959 "low_priority_weight": 0,
00:04:07.959 "medium_priority_weight": 0,
00:04:07.959 "high_priority_weight": 0,
00:04:07.959 "nvme_adminq_poll_period_us": 10000,
00:04:07.959 "nvme_ioq_poll_period_us": 0,
00:04:07.959 "io_queue_requests": 0,
00:04:07.959 "delay_cmd_submit": true,
00:04:07.959 "transport_retry_count": 4,
00:04:07.959 "bdev_retry_count": 3,
00:04:07.959 "transport_ack_timeout": 0,
00:04:07.959 "ctrlr_loss_timeout_sec": 0,
00:04:07.959 "reconnect_delay_sec": 0,
00:04:07.959 "fast_io_fail_timeout_sec": 0,
00:04:07.959 "disable_auto_failback": false,
00:04:07.959 "generate_uuids": false,
00:04:07.959 "transport_tos": 0,
00:04:07.959 "nvme_error_stat": false,
00:04:07.959 "rdma_srq_size": 0,
00:04:07.959 "io_path_stat": false,
00:04:07.959 "allow_accel_sequence": false,
00:04:07.959 "rdma_max_cq_size": 0,
00:04:07.959 "rdma_cm_event_timeout_ms": 0,
00:04:07.959 "dhchap_digests": [
00:04:07.959 "sha256",
00:04:07.959 "sha384",
00:04:07.959 "sha512"
00:04:07.959 ],
00:04:07.959 "dhchap_dhgroups": [
00:04:07.959 "null",
00:04:07.959 "ffdhe2048",
00:04:07.959 "ffdhe3072",
00:04:07.959 "ffdhe4096",
00:04:07.959 "ffdhe6144",
00:04:07.959 "ffdhe8192"
00:04:07.959 ]
00:04:07.959 }
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "method": "bdev_nvme_set_hotplug",
00:04:07.959 "params": {
00:04:07.959 "period_us": 100000,
00:04:07.959 "enable": false
00:04:07.959 }
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "method": "bdev_wait_for_examine"
00:04:07.959 }
00:04:07.959 ]
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "subsystem": "scsi",
00:04:07.959 "config": null
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "subsystem": "scheduler",
00:04:07.959 "config": [
00:04:07.959 {
00:04:07.959 "method": "framework_set_scheduler",
00:04:07.959 "params": {
00:04:07.959 "name": "static"
00:04:07.959 }
00:04:07.959 }
00:04:07.959 ]
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "subsystem": "vhost_scsi",
00:04:07.959 "config": []
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "subsystem": "vhost_blk",
00:04:07.959 "config": []
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "subsystem": "ublk",
00:04:07.959 "config": []
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "subsystem": "nbd",
00:04:07.959 "config": []
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "subsystem": "nvmf",
00:04:07.959 "config": [
00:04:07.959 {
00:04:07.959 "method": "nvmf_set_config",
00:04:07.959 "params": {
00:04:07.959 "discovery_filter": "match_any",
00:04:07.959 "admin_cmd_passthru": {
00:04:07.959 "identify_ctrlr": false
00:04:07.959 },
00:04:07.959 "dhchap_digests": [
00:04:07.959 "sha256",
00:04:07.959 "sha384",
00:04:07.959 "sha512"
00:04:07.959 ],
00:04:07.959 "dhchap_dhgroups": [
00:04:07.959 "null",
00:04:07.959 "ffdhe2048",
00:04:07.959 "ffdhe3072",
00:04:07.959 "ffdhe4096",
00:04:07.959 "ffdhe6144",
00:04:07.959 "ffdhe8192"
00:04:07.959 ]
00:04:07.959 }
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "method": "nvmf_set_max_subsystems",
00:04:07.959 "params": {
00:04:07.959 "max_subsystems": 1024
00:04:07.959 }
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "method": "nvmf_set_crdt",
00:04:07.959 "params": {
00:04:07.959 "crdt1": 0,
00:04:07.959 "crdt2": 0,
00:04:07.959 "crdt3": 0
00:04:07.959 }
00:04:07.959 },
00:04:07.959 {
00:04:07.959 "method": "nvmf_create_transport",
00:04:07.959 "params": {
00:04:07.959 "trtype": "TCP",
00:04:07.959 "max_queue_depth": 128,
00:04:07.959 "max_io_qpairs_per_ctrlr": 127,
00:04:07.959 "in_capsule_data_size": 4096,
00:04:07.959 "max_io_size": 131072,
00:04:07.959 "io_unit_size": 131072,
00:04:07.959 
"max_aq_depth": 128, 00:04:07.959 "num_shared_buffers": 511, 00:04:07.959 "buf_cache_size": 4294967295, 00:04:07.959 "dif_insert_or_strip": false, 00:04:07.959 "zcopy": false, 00:04:07.959 "c2h_success": true, 00:04:07.959 "sock_priority": 0, 00:04:07.959 "abort_timeout_sec": 1, 00:04:07.959 "ack_timeout": 0, 00:04:07.959 "data_wr_pool_size": 0 00:04:07.959 } 00:04:07.959 } 00:04:07.959 ] 00:04:07.959 }, 00:04:07.959 { 00:04:07.959 "subsystem": "iscsi", 00:04:07.959 "config": [ 00:04:07.959 { 00:04:07.959 "method": "iscsi_set_options", 00:04:07.959 "params": { 00:04:07.959 "node_base": "iqn.2016-06.io.spdk", 00:04:07.959 "max_sessions": 128, 00:04:07.959 "max_connections_per_session": 2, 00:04:07.959 "max_queue_depth": 64, 00:04:07.959 "default_time2wait": 2, 00:04:07.959 "default_time2retain": 20, 00:04:07.959 "first_burst_length": 8192, 00:04:07.959 "immediate_data": true, 00:04:07.959 "allow_duplicated_isid": false, 00:04:07.959 "error_recovery_level": 0, 00:04:07.959 "nop_timeout": 60, 00:04:07.959 "nop_in_interval": 30, 00:04:07.959 "disable_chap": false, 00:04:07.959 "require_chap": false, 00:04:07.959 "mutual_chap": false, 00:04:07.959 "chap_group": 0, 00:04:07.959 "max_large_datain_per_connection": 64, 00:04:07.959 "max_r2t_per_connection": 4, 00:04:07.959 "pdu_pool_size": 36864, 00:04:07.959 "immediate_data_pool_size": 16384, 00:04:07.959 "data_out_pool_size": 2048 00:04:07.959 } 00:04:07.959 } 00:04:07.959 ] 00:04:07.959 } 00:04:07.959 ] 00:04:07.959 } 00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3250507 00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3250507 ']' 00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3250507 00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 
00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3250507
00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3250507'
killing process with pid 3250507
00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3250507
00:04:07.959 21:52:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3250507
00:04:08.220 21:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3250853
00:04:08.220 21:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:08.220 21:52:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3250853
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3250853 ']'
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3250853
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3250853
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3250853'
killing process with pid 3250853
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3250853
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3250853
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:13.512
00:04:13.512 real 0m6.563s
00:04:13.512 user 0m6.446s
00:04:13.512 sys 0m0.583s
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:13.512 21:52:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:13.512 ************************************
00:04:13.512 END TEST skip_rpc_with_json
00:04:13.512 ************************************
00:04:13.513 21:52:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:13.513 21:52:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:13.513 21:52:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:13.513 21:52:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:13.513 ************************************
00:04:13.513 START TEST skip_rpc_with_delay
00:04:13.513 ************************************
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:13.513 21:52:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:13.513 [2024-10-12 21:52:31.999621] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:13.513 [2024-10-12 21:52:31.999692] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:04:13.773 21:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:04:13.773 21:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:13.773 21:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:13.773 21:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:13.773
00:04:13.773 real 0m0.069s
00:04:13.773 user 0m0.046s
00:04:13.773 sys 0m0.023s
00:04:13.773 21:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:13.773 21:52:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:13.773 ************************************
00:04:13.773 END TEST skip_rpc_with_delay
00:04:13.773 ************************************
00:04:13.773 21:52:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:13.773 21:52:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:13.773 21:52:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:13.773 21:52:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:13.773 21:52:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:13.773 21:52:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:13.773 ************************************
00:04:13.773 START TEST exit_on_failed_rpc_init
00:04:13.773 ************************************
00:04:13.773 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init
00:04:13.773 21:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3251919
00:04:13.773 21:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3251919
00:04:13.773 21:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:13.773 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3251919 ']'
00:04:13.773 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:13.773 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:13.773 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:13.773 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:13.773 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:13.773 [2024-10-12 21:52:32.156451] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:04:13.773 [2024-10-12 21:52:32.156512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251919 ]
00:04:13.773 [2024-10-12 21:52:32.235616] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:13.773 [2024-10-12 21:52:32.269366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:14.604 21:52:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:14.604 [2024-10-12 21:52:33.005646] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:04:14.604 [2024-10-12 21:52:33.005702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252136 ]
00:04:14.604 [2024-10-12 21:52:33.081188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:14.864 [2024-10-12 21:52:33.111928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:04:14.864 [2024-10-12 21:52:33.111986] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:14.864 [2024-10-12 21:52:33.111996] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:14.864 [2024-10-12 21:52:33.112002] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3251919
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3251919 ']'
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3251919
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3251919
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3251919'
00:04:14.864 killing process with pid 3251919 00:04:14.864 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3251919 00:04:14.865 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3251919 00:04:15.126 00:04:15.126 real 0m1.315s 00:04:15.126 user 0m1.495s 00:04:15.126 sys 0m0.413s 00:04:15.126 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.126 21:52:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.126 ************************************ 00:04:15.126 END TEST exit_on_failed_rpc_init 00:04:15.126 ************************************ 00:04:15.126 21:52:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:15.126 00:04:15.126 real 0m13.744s 00:04:15.126 user 0m13.227s 00:04:15.126 sys 0m1.643s 00:04:15.126 21:52:33 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.126 21:52:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.126 ************************************ 00:04:15.126 END TEST skip_rpc 00:04:15.126 ************************************ 00:04:15.126 21:52:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:15.126 21:52:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.126 21:52:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.126 21:52:33 -- common/autotest_common.sh@10 -- # set +x 00:04:15.126 ************************************ 00:04:15.126 START TEST rpc_client 00:04:15.126 ************************************ 00:04:15.126 21:52:33 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:15.387 * Looking for test storage... 
00:04:15.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@345 -- # : 1
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@353 -- # local d=1
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@355 -- # echo 1
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@353 -- # local d=2
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@355 -- # echo 2
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:15.387 21:52:33 rpc_client -- scripts/common.sh@368 -- # return 0
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:15.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.387 --rc genhtml_branch_coverage=1
00:04:15.387 --rc genhtml_function_coverage=1
00:04:15.387 --rc genhtml_legend=1
00:04:15.387 --rc geninfo_all_blocks=1
00:04:15.387 --rc geninfo_unexecuted_blocks=1
00:04:15.387
00:04:15.387 '
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:15.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.387 --rc genhtml_branch_coverage=1
00:04:15.387 --rc genhtml_function_coverage=1
00:04:15.387 --rc genhtml_legend=1
00:04:15.387 --rc geninfo_all_blocks=1
00:04:15.387 --rc geninfo_unexecuted_blocks=1
00:04:15.387
00:04:15.387 '
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:15.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.387 --rc genhtml_branch_coverage=1
00:04:15.387 --rc genhtml_function_coverage=1
00:04:15.387 --rc genhtml_legend=1
00:04:15.387 --rc geninfo_all_blocks=1
00:04:15.387 --rc geninfo_unexecuted_blocks=1
00:04:15.387
00:04:15.387 '
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:15.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.387 --rc genhtml_branch_coverage=1
00:04:15.387 --rc genhtml_function_coverage=1
00:04:15.387 --rc genhtml_legend=1
00:04:15.387 --rc geninfo_all_blocks=1
00:04:15.387 --rc geninfo_unexecuted_blocks=1
00:04:15.387
00:04:15.387 '
00:04:15.387 21:52:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:04:15.387 OK
00:04:15.387 21:52:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:15.387
00:04:15.387 real 0m0.218s
00:04:15.387 user 0m0.126s
00:04:15.387 sys 0m0.107s
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:15.387 21:52:33 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:15.387 ************************************
00:04:15.387 END TEST rpc_client
00:04:15.387 ************************************
00:04:15.387 21:52:33 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:15.387 21:52:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:15.387 21:52:33 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:15.387 21:52:33 -- common/autotest_common.sh@10 -- # set +x
00:04:15.387 ************************************
00:04:15.387 START TEST json_config
00:04:15.387 ************************************
00:04:15.387 21:52:33 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:04:15.649 21:52:33 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:15.649 21:52:33 json_config -- common/autotest_common.sh@1681 -- # lcov --version
00:04:15.649 21:52:33 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:15.649 21:52:33 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:15.649 21:52:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:15.649 21:52:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:15.649 21:52:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:15.649 21:52:33 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:04:15.649 21:52:33 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:04:15.649 21:52:33 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:04:15.649 21:52:33 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:04:15.649 21:52:33 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:04:15.649 21:52:33 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:04:15.649 21:52:33 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:04:15.649 21:52:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:15.649 21:52:33 json_config -- scripts/common.sh@344 -- # case "$op" in
00:04:15.649 21:52:33 json_config -- scripts/common.sh@345 -- # : 1
00:04:15.649 21:52:33 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:15.649 21:52:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:15.649 21:52:33 json_config -- scripts/common.sh@365 -- # decimal 1
00:04:15.649 21:52:33 json_config -- scripts/common.sh@353 -- # local d=1
00:04:15.649 21:52:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:15.649 21:52:33 json_config -- scripts/common.sh@355 -- # echo 1
00:04:15.649 21:52:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:04:15.649 21:52:33 json_config -- scripts/common.sh@366 -- # decimal 2
00:04:15.649 21:52:33 json_config -- scripts/common.sh@353 -- # local d=2
00:04:15.649 21:52:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:15.649 21:52:33 json_config -- scripts/common.sh@355 -- # echo 2
00:04:15.649 21:52:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:04:15.649 21:52:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:15.649 21:52:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:15.649 21:52:33 json_config -- scripts/common.sh@368 -- # return 0
00:04:15.649 21:52:33 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:15.649 21:52:33 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:15.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.649 --rc genhtml_branch_coverage=1
00:04:15.649 --rc genhtml_function_coverage=1
00:04:15.649 --rc genhtml_legend=1
00:04:15.649 --rc geninfo_all_blocks=1
00:04:15.649 --rc geninfo_unexecuted_blocks=1
00:04:15.649
00:04:15.649 '
00:04:15.649 21:52:33 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:15.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.649 --rc genhtml_branch_coverage=1
00:04:15.649 --rc genhtml_function_coverage=1
00:04:15.649 --rc genhtml_legend=1
00:04:15.649 --rc geninfo_all_blocks=1
00:04:15.649 --rc geninfo_unexecuted_blocks=1
00:04:15.649
00:04:15.649 '
00:04:15.649 21:52:33 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:15.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.649 --rc genhtml_branch_coverage=1
00:04:15.649 --rc genhtml_function_coverage=1
00:04:15.649 --rc genhtml_legend=1
00:04:15.649 --rc geninfo_all_blocks=1
00:04:15.649 --rc geninfo_unexecuted_blocks=1
00:04:15.649
00:04:15.649 '
00:04:15.649 21:52:33 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:15.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:15.649 --rc genhtml_branch_coverage=1
00:04:15.649 --rc genhtml_function_coverage=1
00:04:15.649 --rc genhtml_legend=1
00:04:15.649 --rc geninfo_all_blocks=1
00:04:15.649 --rc geninfo_unexecuted_blocks=1
00:04:15.649
00:04:15.649 '
00:04:15.649 21:52:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:04:15.649 21:52:33 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:15.649 21:52:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:15.649 21:52:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:15.649 21:52:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:15.649 21:52:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:15.649 21:52:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:04:15.649 21:52:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:15.649 21:52:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:15.649 21:52:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:15.649 21:52:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.649 21:52:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.649 21:52:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.649 21:52:34 json_config -- paths/export.sh@5 -- # export PATH
00:04:15.649 21:52:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@51 -- # : 0
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:04:15.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:15.649 21:52:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:15.649 21:52:34 json_config -- json_config/json_config.sh@9 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:15.649 21:52:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:15.649 21:52:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:15.650 INFO: JSON configuration test init 00:04:15.650 21:52:34 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:15.650 21:52:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:15.650 21:52:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:15.650 21:52:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:15.650 21:52:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.650 21:52:34 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:15.650 21:52:34 json_config -- json_config/common.sh@9 -- # local app=target 00:04:15.650 21:52:34 json_config -- json_config/common.sh@10 -- # shift 00:04:15.650 21:52:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:15.650 21:52:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:15.650 21:52:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:15.650 21:52:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:15.650 21:52:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:15.650 21:52:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3252394 00:04:15.650 21:52:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:15.650 Waiting for target to run... 
00:04:15.650 21:52:34 json_config -- json_config/common.sh@25 -- # waitforlisten 3252394 /var/tmp/spdk_tgt.sock 00:04:15.650 21:52:34 json_config -- common/autotest_common.sh@831 -- # '[' -z 3252394 ']' 00:04:15.650 21:52:34 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:15.650 21:52:34 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:15.650 21:52:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:15.650 21:52:34 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:15.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:15.650 21:52:34 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:15.650 21:52:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.650 [2024-10-12 21:52:34.104278] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:04:15.650 [2024-10-12 21:52:34.104348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252394 ] 00:04:16.221 [2024-10-12 21:52:34.467553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.221 [2024-10-12 21:52:34.494158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.482 21:52:34 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:16.482 21:52:34 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:16.482 21:52:34 json_config -- json_config/common.sh@26 -- # echo '' 00:04:16.482 00:04:16.482 21:52:34 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:16.482 21:52:34 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:16.482 21:52:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.482 21:52:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.482 21:52:34 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:16.482 21:52:34 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:16.482 21:52:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:16.482 21:52:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.482 21:52:34 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:16.482 21:52:34 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:16.482 21:52:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:17.053 21:52:35 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:17.053 21:52:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:17.053 21:52:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.053 21:52:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.053 21:52:35 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:17.053 21:52:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:17.053 21:52:35 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:17.053 21:52:35 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:17.053 21:52:35 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:17.053 21:52:35 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:17.053 21:52:35 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:17.053 21:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@54 -- # sort 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:17.314 21:52:35 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:17.314 21:52:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:17.314 21:52:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:17.314 21:52:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:17.314 21:52:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:17.314 21:52:35 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:17.314 21:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:17.576 MallocForNvmf0 00:04:17.576 21:52:35 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:17.576 21:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.576 MallocForNvmf1 00:04:17.838 21:52:36 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:17.838 21:52:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:17.838 [2024-10-12 21:52:36.241522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.838 21:52:36 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:17.838 21:52:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:18.099 21:52:36 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:18.099 21:52:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:18.358 21:52:36 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.358 21:52:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.358 21:52:36 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:18.358 21:52:36 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:18.618 [2024-10-12 21:52:36.955630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:18.618 21:52:36 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:18.618 21:52:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.618 21:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.618 21:52:37 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:18.618 21:52:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.618 21:52:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.618 21:52:37 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:18.618 21:52:37 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.618 21:52:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.879 MallocBdevForConfigChangeCheck 00:04:18.879 21:52:37 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:18.879 21:52:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.879 21:52:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.879 21:52:37 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:18.879 21:52:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.139 21:52:37 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:19.139 INFO: shutting down applications... 00:04:19.139 21:52:37 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:19.139 21:52:37 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:19.139 21:52:37 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:19.139 21:52:37 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:19.709 Calling clear_iscsi_subsystem 00:04:19.709 Calling clear_nvmf_subsystem 00:04:19.709 Calling clear_nbd_subsystem 00:04:19.709 Calling clear_ublk_subsystem 00:04:19.709 Calling clear_vhost_blk_subsystem 00:04:19.709 Calling clear_vhost_scsi_subsystem 00:04:19.709 Calling clear_bdev_subsystem 00:04:19.709 21:52:38 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:19.709 21:52:38 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:19.709 21:52:38 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:19.709 21:52:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.709 21:52:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:19.709 21:52:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:19.968 21:52:38 json_config -- json_config/json_config.sh@352 -- # break 00:04:19.968 21:52:38 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:19.968 21:52:38 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:19.969 21:52:38 json_config -- json_config/common.sh@31 -- # local app=target 00:04:19.969 21:52:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:19.969 21:52:38 json_config -- json_config/common.sh@35 -- # [[ -n 3252394 ]] 00:04:19.969 21:52:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3252394 00:04:19.969 21:52:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:19.969 21:52:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.969 21:52:38 json_config -- json_config/common.sh@41 -- # kill -0 3252394 00:04:19.969 21:52:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:20.540 21:52:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:20.540 21:52:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:20.540 21:52:38 json_config -- json_config/common.sh@41 -- # kill -0 3252394 00:04:20.540 21:52:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:20.540 21:52:38 json_config -- json_config/common.sh@43 -- # break 00:04:20.540 21:52:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:20.540 21:52:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:20.540 SPDK target shutdown done 00:04:20.540 21:52:38 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:20.540 INFO: relaunching applications... 
00:04:20.540 21:52:38 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.540 21:52:38 json_config -- json_config/common.sh@9 -- # local app=target 00:04:20.540 21:52:38 json_config -- json_config/common.sh@10 -- # shift 00:04:20.540 21:52:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:20.540 21:52:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:20.540 21:52:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:20.540 21:52:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.540 21:52:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:20.540 21:52:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3253531 00:04:20.540 21:52:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.540 21:52:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:20.540 Waiting for target to run... 00:04:20.540 21:52:38 json_config -- json_config/common.sh@25 -- # waitforlisten 3253531 /var/tmp/spdk_tgt.sock 00:04:20.540 21:52:38 json_config -- common/autotest_common.sh@831 -- # '[' -z 3253531 ']' 00:04:20.540 21:52:38 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:20.540 21:52:38 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.540 21:52:38 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:20.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:20.540 21:52:38 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.540 21:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.540 [2024-10-12 21:52:38.998609] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:04:20.540 [2024-10-12 21:52:38.998669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253531 ] 00:04:21.111 [2024-10-12 21:52:39.303418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.111 [2024-10-12 21:52:39.321979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.372 [2024-10-12 21:52:39.795690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.372 [2024-10-12 21:52:39.828026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:21.633 21:52:39 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.633 21:52:39 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:21.633 21:52:39 json_config -- json_config/common.sh@26 -- # echo '' 00:04:21.633 00:04:21.633 21:52:39 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:21.633 21:52:39 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:21.633 INFO: Checking if target configuration is the same... 
00:04:21.633 21:52:39 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.633 21:52:39 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:21.633 21:52:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.633 + '[' 2 -ne 2 ']' 00:04:21.633 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:21.633 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:21.633 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:21.633 +++ basename /dev/fd/62 00:04:21.633 ++ mktemp /tmp/62.XXX 00:04:21.633 + tmp_file_1=/tmp/62.uFY 00:04:21.633 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:21.633 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.633 + tmp_file_2=/tmp/spdk_tgt_config.json.JaN 00:04:21.633 + ret=0 00:04:21.633 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:21.893 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:21.893 + diff -u /tmp/62.uFY /tmp/spdk_tgt_config.json.JaN 00:04:21.893 + echo 'INFO: JSON config files are the same' 00:04:21.893 INFO: JSON config files are the same 00:04:21.893 + rm /tmp/62.uFY /tmp/spdk_tgt_config.json.JaN 00:04:21.893 + exit 0 00:04:21.893 21:52:40 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:21.893 21:52:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:21.893 INFO: changing configuration and checking if this can be detected... 
00:04:21.893 21:52:40 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:21.893 21:52:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:22.153 21:52:40 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.153 21:52:40 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:22.153 21:52:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.153 + '[' 2 -ne 2 ']' 00:04:22.153 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:22.153 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:22.153 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:22.153 +++ basename /dev/fd/62 00:04:22.153 ++ mktemp /tmp/62.XXX 00:04:22.153 + tmp_file_1=/tmp/62.AIH 00:04:22.153 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.153 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:22.153 + tmp_file_2=/tmp/spdk_tgt_config.json.sM3 00:04:22.153 + ret=0 00:04:22.153 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:22.413 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:22.413 + diff -u /tmp/62.AIH /tmp/spdk_tgt_config.json.sM3 00:04:22.413 + ret=1 00:04:22.413 + echo '=== Start of file: /tmp/62.AIH ===' 00:04:22.413 + cat /tmp/62.AIH 00:04:22.413 + echo '=== End of file: /tmp/62.AIH ===' 00:04:22.413 + echo '' 00:04:22.413 + echo '=== Start of file: /tmp/spdk_tgt_config.json.sM3 ===' 00:04:22.413 + cat /tmp/spdk_tgt_config.json.sM3 00:04:22.413 + echo '=== End of file: /tmp/spdk_tgt_config.json.sM3 ===' 00:04:22.413 + echo '' 00:04:22.413 + rm /tmp/62.AIH /tmp/spdk_tgt_config.json.sM3 00:04:22.413 + exit 1 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:22.413 INFO: configuration change detected. 
00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@324 -- # [[ -n 3253531 ]] 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.413 21:52:40 json_config -- json_config/json_config.sh@330 -- # killprocess 3253531 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@950 -- # '[' -z 3253531 ']' 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@954 -- # kill -0 
3253531 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@955 -- # uname 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:22.413 21:52:40 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3253531 00:04:22.674 21:52:40 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:22.674 21:52:40 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:22.674 21:52:40 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3253531' 00:04:22.674 killing process with pid 3253531 00:04:22.674 21:52:40 json_config -- common/autotest_common.sh@969 -- # kill 3253531 00:04:22.674 21:52:40 json_config -- common/autotest_common.sh@974 -- # wait 3253531 00:04:22.934 21:52:41 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.934 21:52:41 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:22.934 21:52:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.934 21:52:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.934 21:52:41 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:22.934 21:52:41 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:22.934 INFO: Success 00:04:22.934 00:04:22.934 real 0m7.442s 00:04:22.934 user 0m9.170s 00:04:22.934 sys 0m1.872s 00:04:22.934 21:52:41 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.935 21:52:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.935 ************************************ 00:04:22.935 END TEST json_config 00:04:22.935 ************************************ 00:04:22.935 21:52:41 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.935 21:52:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.935 21:52:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.935 21:52:41 -- common/autotest_common.sh@10 -- # set +x 00:04:22.935 ************************************ 00:04:22.935 START TEST json_config_extra_key 00:04:22.935 ************************************ 00:04:22.935 21:52:41 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.935 21:52:41 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:22.935 21:52:41 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:22.935 21:52:41 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:23.196 21:52:41 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:23.196 21:52:41 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.196 21:52:41 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.196 --rc genhtml_branch_coverage=1 00:04:23.196 --rc genhtml_function_coverage=1 00:04:23.196 --rc genhtml_legend=1 00:04:23.196 --rc geninfo_all_blocks=1 
00:04:23.196 --rc geninfo_unexecuted_blocks=1 00:04:23.196 00:04:23.196 ' 00:04:23.196 21:52:41 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.196 --rc genhtml_branch_coverage=1 00:04:23.196 --rc genhtml_function_coverage=1 00:04:23.196 --rc genhtml_legend=1 00:04:23.196 --rc geninfo_all_blocks=1 00:04:23.196 --rc geninfo_unexecuted_blocks=1 00:04:23.196 00:04:23.196 ' 00:04:23.196 21:52:41 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.196 --rc genhtml_branch_coverage=1 00:04:23.196 --rc genhtml_function_coverage=1 00:04:23.196 --rc genhtml_legend=1 00:04:23.196 --rc geninfo_all_blocks=1 00:04:23.196 --rc geninfo_unexecuted_blocks=1 00:04:23.196 00:04:23.196 ' 00:04:23.196 21:52:41 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.196 --rc genhtml_branch_coverage=1 00:04:23.196 --rc genhtml_function_coverage=1 00:04:23.196 --rc genhtml_legend=1 00:04:23.196 --rc geninfo_all_blocks=1 00:04:23.196 --rc geninfo_unexecuted_blocks=1 00:04:23.196 00:04:23.196 ' 00:04:23.196 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.196 21:52:41 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:23.196 21:52:41 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:23.197 21:52:41 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.197 21:52:41 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.197 21:52:41 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.197 21:52:41 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.197 21:52:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.197 21:52:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.197 21:52:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:23.197 21:52:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.197 21:52:41 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:23.197 21:52:41 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:23.197 21:52:41 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:23.197 21:52:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.197 21:52:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.197 21:52:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.197 21:52:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:23.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:23.197 21:52:41 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:23.197 21:52:41 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:23.197 21:52:41 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:23.197 INFO: launching applications... 00:04:23.197 21:52:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3254314 00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.197 Waiting for target to run... 
00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3254314 /var/tmp/spdk_tgt.sock 00:04:23.197 21:52:41 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3254314 ']' 00:04:23.197 21:52:41 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.197 21:52:41 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:23.197 21:52:41 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.197 21:52:41 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.197 21:52:41 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.197 21:52:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.197 [2024-10-12 21:52:41.616791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:04:23.197 [2024-10-12 21:52:41.616856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254314 ] 00:04:23.458 [2024-10-12 21:52:41.895091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.458 [2024-10-12 21:52:41.913136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.027 21:52:42 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:24.027 21:52:42 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:24.027 21:52:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:24.027 00:04:24.027 21:52:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:24.027 INFO: shutting down applications... 00:04:24.027 21:52:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:24.027 21:52:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:24.027 21:52:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.027 21:52:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3254314 ]] 00:04:24.027 21:52:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3254314 00:04:24.027 21:52:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.027 21:52:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.027 21:52:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3254314 00:04:24.027 21:52:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.599 21:52:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.599 21:52:42 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.599 21:52:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3254314 00:04:24.599 21:52:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.599 21:52:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:24.599 21:52:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.599 21:52:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:24.599 SPDK target shutdown done 00:04:24.599 21:52:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:24.599 Success 00:04:24.599 00:04:24.599 real 0m1.572s 00:04:24.599 user 0m1.196s 00:04:24.599 sys 0m0.402s 00:04:24.599 21:52:42 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.599 21:52:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:24.599 ************************************ 00:04:24.599 END TEST json_config_extra_key 00:04:24.599 ************************************ 00:04:24.599 21:52:42 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.599 21:52:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.599 21:52:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.599 21:52:42 -- common/autotest_common.sh@10 -- # set +x 00:04:24.599 ************************************ 00:04:24.599 START TEST alias_rpc 00:04:24.599 ************************************ 00:04:24.599 21:52:42 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.599 * Looking for test storage... 
00:04:24.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:24.860 21:52:43 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:24.860 21:52:43 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:24.860 21:52:43 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:24.860 21:52:43 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:24.860 21:52:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.860 21:52:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.860 21:52:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.860 21:52:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.860 21:52:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.860 21:52:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.861 21:52:43 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:24.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.861 --rc genhtml_branch_coverage=1 00:04:24.861 --rc genhtml_function_coverage=1 00:04:24.861 --rc genhtml_legend=1 00:04:24.861 --rc geninfo_all_blocks=1 00:04:24.861 --rc geninfo_unexecuted_blocks=1 00:04:24.861 00:04:24.861 ' 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:24.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.861 --rc genhtml_branch_coverage=1 00:04:24.861 --rc genhtml_function_coverage=1 00:04:24.861 --rc genhtml_legend=1 00:04:24.861 --rc geninfo_all_blocks=1 00:04:24.861 --rc geninfo_unexecuted_blocks=1 00:04:24.861 00:04:24.861 ' 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:04:24.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.861 --rc genhtml_branch_coverage=1 00:04:24.861 --rc genhtml_function_coverage=1 00:04:24.861 --rc genhtml_legend=1 00:04:24.861 --rc geninfo_all_blocks=1 00:04:24.861 --rc geninfo_unexecuted_blocks=1 00:04:24.861 00:04:24.861 ' 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:24.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.861 --rc genhtml_branch_coverage=1 00:04:24.861 --rc genhtml_function_coverage=1 00:04:24.861 --rc genhtml_legend=1 00:04:24.861 --rc geninfo_all_blocks=1 00:04:24.861 --rc geninfo_unexecuted_blocks=1 00:04:24.861 00:04:24.861 ' 00:04:24.861 21:52:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:24.861 21:52:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3254703 00:04:24.861 21:52:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3254703 00:04:24.861 21:52:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3254703 ']' 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.861 21:52:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.861 [2024-10-12 21:52:43.251318] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:04:24.861 [2024-10-12 21:52:43.251393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254703 ] 00:04:24.861 [2024-10-12 21:52:43.333291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.121 [2024-10-12 21:52:43.366541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.691 21:52:44 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:25.691 21:52:44 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:25.691 21:52:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:25.952 21:52:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3254703 00:04:25.952 21:52:44 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3254703 ']' 00:04:25.952 21:52:44 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3254703 00:04:25.952 21:52:44 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:25.952 21:52:44 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:25.952 21:52:44 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3254703 00:04:25.952 21:52:44 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:25.952 21:52:44 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:25.952 21:52:44 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3254703' 00:04:25.952 killing process with pid 3254703 00:04:25.952 21:52:44 alias_rpc -- common/autotest_common.sh@969 -- # kill 3254703 00:04:25.952 21:52:44 alias_rpc -- common/autotest_common.sh@974 -- # wait 3254703 00:04:26.213 00:04:26.213 real 0m1.484s 00:04:26.213 user 0m1.591s 00:04:26.213 sys 0m0.435s 00:04:26.213 21:52:44 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.213 21:52:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.213 ************************************ 00:04:26.213 END TEST alias_rpc 00:04:26.213 ************************************ 00:04:26.213 21:52:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:26.213 21:52:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:26.213 21:52:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.213 21:52:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.213 21:52:44 -- common/autotest_common.sh@10 -- # set +x 00:04:26.213 ************************************ 00:04:26.213 START TEST spdkcli_tcp 00:04:26.213 ************************************ 00:04:26.213 21:52:44 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:26.213 * Looking for test storage... 
00:04:26.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:26.213 21:52:44 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:26.213 21:52:44 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:26.213 21:52:44 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:26.474 21:52:44 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.474 21:52:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:26.474 21:52:44 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.474 21:52:44 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:26.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.474 --rc genhtml_branch_coverage=1 00:04:26.474 --rc genhtml_function_coverage=1 00:04:26.474 --rc genhtml_legend=1 00:04:26.474 --rc geninfo_all_blocks=1 00:04:26.474 --rc geninfo_unexecuted_blocks=1 00:04:26.474 00:04:26.474 ' 00:04:26.474 21:52:44 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:26.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.474 --rc genhtml_branch_coverage=1 00:04:26.474 --rc genhtml_function_coverage=1 00:04:26.474 --rc genhtml_legend=1 00:04:26.474 --rc geninfo_all_blocks=1 00:04:26.474 --rc geninfo_unexecuted_blocks=1 00:04:26.474 00:04:26.474 ' 00:04:26.474 21:52:44 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:26.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.474 --rc genhtml_branch_coverage=1 00:04:26.474 --rc genhtml_function_coverage=1 00:04:26.474 --rc genhtml_legend=1 00:04:26.474 --rc geninfo_all_blocks=1 00:04:26.474 --rc geninfo_unexecuted_blocks=1 00:04:26.474 00:04:26.474 ' 00:04:26.474 21:52:44 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:26.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.474 --rc genhtml_branch_coverage=1 00:04:26.474 --rc genhtml_function_coverage=1 00:04:26.474 --rc genhtml_legend=1 00:04:26.474 --rc geninfo_all_blocks=1 00:04:26.474 --rc geninfo_unexecuted_blocks=1 00:04:26.474 00:04:26.474 ' 00:04:26.474 21:52:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:26.474 21:52:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:26.475 21:52:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:26.475 21:52:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:26.475 21:52:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:26.475 21:52:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:26.475 21:52:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:26.475 21:52:44 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.475 21:52:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.475 21:52:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3255037 00:04:26.475 21:52:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3255037 00:04:26.475 21:52:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:26.475 21:52:44 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3255037 ']' 00:04:26.475 21:52:44 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.475 21:52:44 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.475 21:52:44 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.475 21:52:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.475 21:52:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.475 [2024-10-12 21:52:44.822955] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:04:26.475 [2024-10-12 21:52:44.823027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255037 ] 00:04:26.475 [2024-10-12 21:52:44.904411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.475 [2024-10-12 21:52:44.938173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.475 [2024-10-12 21:52:44.938329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.416 21:52:45 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:27.416 21:52:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:27.416 21:52:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3255121 00:04:27.416 21:52:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:27.416 21:52:45 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:27.416 [ 00:04:27.416 "bdev_malloc_delete", 00:04:27.416 "bdev_malloc_create", 00:04:27.416 "bdev_null_resize", 00:04:27.416 "bdev_null_delete", 00:04:27.416 "bdev_null_create", 00:04:27.416 "bdev_nvme_cuse_unregister", 00:04:27.416 "bdev_nvme_cuse_register", 00:04:27.416 "bdev_opal_new_user", 00:04:27.416 "bdev_opal_set_lock_state", 00:04:27.416 "bdev_opal_delete", 00:04:27.416 "bdev_opal_get_info", 00:04:27.416 "bdev_opal_create", 00:04:27.416 "bdev_nvme_opal_revert", 00:04:27.416 "bdev_nvme_opal_init", 00:04:27.416 "bdev_nvme_send_cmd", 00:04:27.416 "bdev_nvme_set_keys", 00:04:27.416 "bdev_nvme_get_path_iostat", 00:04:27.416 "bdev_nvme_get_mdns_discovery_info", 00:04:27.416 "bdev_nvme_stop_mdns_discovery", 00:04:27.416 "bdev_nvme_start_mdns_discovery", 00:04:27.416 "bdev_nvme_set_multipath_policy", 00:04:27.416 "bdev_nvme_set_preferred_path", 00:04:27.416 "bdev_nvme_get_io_paths", 00:04:27.416 "bdev_nvme_remove_error_injection", 00:04:27.416 "bdev_nvme_add_error_injection", 00:04:27.416 "bdev_nvme_get_discovery_info", 00:04:27.416 "bdev_nvme_stop_discovery", 00:04:27.416 "bdev_nvme_start_discovery", 00:04:27.416 "bdev_nvme_get_controller_health_info", 00:04:27.416 "bdev_nvme_disable_controller", 00:04:27.416 "bdev_nvme_enable_controller", 00:04:27.416 "bdev_nvme_reset_controller", 00:04:27.416 "bdev_nvme_get_transport_statistics", 00:04:27.416 "bdev_nvme_apply_firmware", 00:04:27.416 "bdev_nvme_detach_controller", 00:04:27.417 "bdev_nvme_get_controllers", 00:04:27.417 "bdev_nvme_attach_controller", 00:04:27.417 "bdev_nvme_set_hotplug", 00:04:27.417 "bdev_nvme_set_options", 00:04:27.417 "bdev_passthru_delete", 00:04:27.417 "bdev_passthru_create", 00:04:27.417 "bdev_lvol_set_parent_bdev", 00:04:27.417 "bdev_lvol_set_parent", 00:04:27.417 "bdev_lvol_check_shallow_copy", 00:04:27.417 "bdev_lvol_start_shallow_copy", 00:04:27.417 "bdev_lvol_grow_lvstore", 00:04:27.417 
"bdev_lvol_get_lvols", 00:04:27.417 "bdev_lvol_get_lvstores", 00:04:27.417 "bdev_lvol_delete", 00:04:27.417 "bdev_lvol_set_read_only", 00:04:27.417 "bdev_lvol_resize", 00:04:27.417 "bdev_lvol_decouple_parent", 00:04:27.417 "bdev_lvol_inflate", 00:04:27.417 "bdev_lvol_rename", 00:04:27.417 "bdev_lvol_clone_bdev", 00:04:27.417 "bdev_lvol_clone", 00:04:27.417 "bdev_lvol_snapshot", 00:04:27.417 "bdev_lvol_create", 00:04:27.417 "bdev_lvol_delete_lvstore", 00:04:27.417 "bdev_lvol_rename_lvstore", 00:04:27.417 "bdev_lvol_create_lvstore", 00:04:27.417 "bdev_raid_set_options", 00:04:27.417 "bdev_raid_remove_base_bdev", 00:04:27.417 "bdev_raid_add_base_bdev", 00:04:27.417 "bdev_raid_delete", 00:04:27.417 "bdev_raid_create", 00:04:27.417 "bdev_raid_get_bdevs", 00:04:27.417 "bdev_error_inject_error", 00:04:27.417 "bdev_error_delete", 00:04:27.417 "bdev_error_create", 00:04:27.417 "bdev_split_delete", 00:04:27.417 "bdev_split_create", 00:04:27.417 "bdev_delay_delete", 00:04:27.417 "bdev_delay_create", 00:04:27.417 "bdev_delay_update_latency", 00:04:27.417 "bdev_zone_block_delete", 00:04:27.417 "bdev_zone_block_create", 00:04:27.417 "blobfs_create", 00:04:27.417 "blobfs_detect", 00:04:27.417 "blobfs_set_cache_size", 00:04:27.417 "bdev_aio_delete", 00:04:27.417 "bdev_aio_rescan", 00:04:27.417 "bdev_aio_create", 00:04:27.417 "bdev_ftl_set_property", 00:04:27.417 "bdev_ftl_get_properties", 00:04:27.417 "bdev_ftl_get_stats", 00:04:27.417 "bdev_ftl_unmap", 00:04:27.417 "bdev_ftl_unload", 00:04:27.417 "bdev_ftl_delete", 00:04:27.417 "bdev_ftl_load", 00:04:27.417 "bdev_ftl_create", 00:04:27.417 "bdev_virtio_attach_controller", 00:04:27.417 "bdev_virtio_scsi_get_devices", 00:04:27.417 "bdev_virtio_detach_controller", 00:04:27.417 "bdev_virtio_blk_set_hotplug", 00:04:27.417 "bdev_iscsi_delete", 00:04:27.417 "bdev_iscsi_create", 00:04:27.417 "bdev_iscsi_set_options", 00:04:27.417 "accel_error_inject_error", 00:04:27.417 "ioat_scan_accel_module", 00:04:27.417 "dsa_scan_accel_module", 
00:04:27.417 "iaa_scan_accel_module", 00:04:27.417 "vfu_virtio_create_fs_endpoint", 00:04:27.417 "vfu_virtio_create_scsi_endpoint", 00:04:27.417 "vfu_virtio_scsi_remove_target", 00:04:27.417 "vfu_virtio_scsi_add_target", 00:04:27.417 "vfu_virtio_create_blk_endpoint", 00:04:27.417 "vfu_virtio_delete_endpoint", 00:04:27.417 "keyring_file_remove_key", 00:04:27.417 "keyring_file_add_key", 00:04:27.417 "keyring_linux_set_options", 00:04:27.417 "fsdev_aio_delete", 00:04:27.417 "fsdev_aio_create", 00:04:27.417 "iscsi_get_histogram", 00:04:27.417 "iscsi_enable_histogram", 00:04:27.417 "iscsi_set_options", 00:04:27.417 "iscsi_get_auth_groups", 00:04:27.417 "iscsi_auth_group_remove_secret", 00:04:27.417 "iscsi_auth_group_add_secret", 00:04:27.417 "iscsi_delete_auth_group", 00:04:27.417 "iscsi_create_auth_group", 00:04:27.417 "iscsi_set_discovery_auth", 00:04:27.417 "iscsi_get_options", 00:04:27.417 "iscsi_target_node_request_logout", 00:04:27.417 "iscsi_target_node_set_redirect", 00:04:27.417 "iscsi_target_node_set_auth", 00:04:27.417 "iscsi_target_node_add_lun", 00:04:27.417 "iscsi_get_stats", 00:04:27.417 "iscsi_get_connections", 00:04:27.417 "iscsi_portal_group_set_auth", 00:04:27.417 "iscsi_start_portal_group", 00:04:27.417 "iscsi_delete_portal_group", 00:04:27.417 "iscsi_create_portal_group", 00:04:27.417 "iscsi_get_portal_groups", 00:04:27.417 "iscsi_delete_target_node", 00:04:27.417 "iscsi_target_node_remove_pg_ig_maps", 00:04:27.417 "iscsi_target_node_add_pg_ig_maps", 00:04:27.417 "iscsi_create_target_node", 00:04:27.417 "iscsi_get_target_nodes", 00:04:27.417 "iscsi_delete_initiator_group", 00:04:27.417 "iscsi_initiator_group_remove_initiators", 00:04:27.417 "iscsi_initiator_group_add_initiators", 00:04:27.417 "iscsi_create_initiator_group", 00:04:27.417 "iscsi_get_initiator_groups", 00:04:27.417 "nvmf_set_crdt", 00:04:27.417 "nvmf_set_config", 00:04:27.417 "nvmf_set_max_subsystems", 00:04:27.417 "nvmf_stop_mdns_prr", 00:04:27.417 "nvmf_publish_mdns_prr", 
00:04:27.417 "nvmf_subsystem_get_listeners", 00:04:27.417 "nvmf_subsystem_get_qpairs", 00:04:27.417 "nvmf_subsystem_get_controllers", 00:04:27.417 "nvmf_get_stats", 00:04:27.417 "nvmf_get_transports", 00:04:27.417 "nvmf_create_transport", 00:04:27.417 "nvmf_get_targets", 00:04:27.417 "nvmf_delete_target", 00:04:27.417 "nvmf_create_target", 00:04:27.417 "nvmf_subsystem_allow_any_host", 00:04:27.417 "nvmf_subsystem_set_keys", 00:04:27.417 "nvmf_subsystem_remove_host", 00:04:27.417 "nvmf_subsystem_add_host", 00:04:27.417 "nvmf_ns_remove_host", 00:04:27.417 "nvmf_ns_add_host", 00:04:27.417 "nvmf_subsystem_remove_ns", 00:04:27.417 "nvmf_subsystem_set_ns_ana_group", 00:04:27.417 "nvmf_subsystem_add_ns", 00:04:27.417 "nvmf_subsystem_listener_set_ana_state", 00:04:27.417 "nvmf_discovery_get_referrals", 00:04:27.417 "nvmf_discovery_remove_referral", 00:04:27.417 "nvmf_discovery_add_referral", 00:04:27.417 "nvmf_subsystem_remove_listener", 00:04:27.417 "nvmf_subsystem_add_listener", 00:04:27.417 "nvmf_delete_subsystem", 00:04:27.417 "nvmf_create_subsystem", 00:04:27.417 "nvmf_get_subsystems", 00:04:27.417 "env_dpdk_get_mem_stats", 00:04:27.417 "nbd_get_disks", 00:04:27.417 "nbd_stop_disk", 00:04:27.417 "nbd_start_disk", 00:04:27.417 "ublk_recover_disk", 00:04:27.417 "ublk_get_disks", 00:04:27.417 "ublk_stop_disk", 00:04:27.417 "ublk_start_disk", 00:04:27.417 "ublk_destroy_target", 00:04:27.417 "ublk_create_target", 00:04:27.417 "virtio_blk_create_transport", 00:04:27.417 "virtio_blk_get_transports", 00:04:27.417 "vhost_controller_set_coalescing", 00:04:27.417 "vhost_get_controllers", 00:04:27.417 "vhost_delete_controller", 00:04:27.417 "vhost_create_blk_controller", 00:04:27.417 "vhost_scsi_controller_remove_target", 00:04:27.417 "vhost_scsi_controller_add_target", 00:04:27.417 "vhost_start_scsi_controller", 00:04:27.417 "vhost_create_scsi_controller", 00:04:27.417 "thread_set_cpumask", 00:04:27.417 "scheduler_set_options", 00:04:27.417 "framework_get_governor", 00:04:27.417 
"framework_get_scheduler", 00:04:27.417 "framework_set_scheduler", 00:04:27.417 "framework_get_reactors", 00:04:27.417 "thread_get_io_channels", 00:04:27.417 "thread_get_pollers", 00:04:27.417 "thread_get_stats", 00:04:27.417 "framework_monitor_context_switch", 00:04:27.417 "spdk_kill_instance", 00:04:27.417 "log_enable_timestamps", 00:04:27.417 "log_get_flags", 00:04:27.417 "log_clear_flag", 00:04:27.417 "log_set_flag", 00:04:27.417 "log_get_level", 00:04:27.417 "log_set_level", 00:04:27.417 "log_get_print_level", 00:04:27.417 "log_set_print_level", 00:04:27.417 "framework_enable_cpumask_locks", 00:04:27.417 "framework_disable_cpumask_locks", 00:04:27.417 "framework_wait_init", 00:04:27.417 "framework_start_init", 00:04:27.417 "scsi_get_devices", 00:04:27.417 "bdev_get_histogram", 00:04:27.417 "bdev_enable_histogram", 00:04:27.417 "bdev_set_qos_limit", 00:04:27.417 "bdev_set_qd_sampling_period", 00:04:27.417 "bdev_get_bdevs", 00:04:27.417 "bdev_reset_iostat", 00:04:27.417 "bdev_get_iostat", 00:04:27.417 "bdev_examine", 00:04:27.417 "bdev_wait_for_examine", 00:04:27.417 "bdev_set_options", 00:04:27.417 "accel_get_stats", 00:04:27.417 "accel_set_options", 00:04:27.417 "accel_set_driver", 00:04:27.417 "accel_crypto_key_destroy", 00:04:27.417 "accel_crypto_keys_get", 00:04:27.417 "accel_crypto_key_create", 00:04:27.417 "accel_assign_opc", 00:04:27.417 "accel_get_module_info", 00:04:27.417 "accel_get_opc_assignments", 00:04:27.417 "vmd_rescan", 00:04:27.417 "vmd_remove_device", 00:04:27.417 "vmd_enable", 00:04:27.417 "sock_get_default_impl", 00:04:27.417 "sock_set_default_impl", 00:04:27.417 "sock_impl_set_options", 00:04:27.417 "sock_impl_get_options", 00:04:27.417 "iobuf_get_stats", 00:04:27.417 "iobuf_set_options", 00:04:27.417 "keyring_get_keys", 00:04:27.417 "vfu_tgt_set_base_path", 00:04:27.417 "framework_get_pci_devices", 00:04:27.417 "framework_get_config", 00:04:27.417 "framework_get_subsystems", 00:04:27.417 "fsdev_set_opts", 00:04:27.417 "fsdev_get_opts", 
00:04:27.417 "trace_get_info", 00:04:27.417 "trace_get_tpoint_group_mask", 00:04:27.417 "trace_disable_tpoint_group", 00:04:27.417 "trace_enable_tpoint_group", 00:04:27.417 "trace_clear_tpoint_mask", 00:04:27.417 "trace_set_tpoint_mask", 00:04:27.417 "notify_get_notifications", 00:04:27.417 "notify_get_types", 00:04:27.417 "spdk_get_version", 00:04:27.417 "rpc_get_methods" 00:04:27.417 ] 00:04:27.417 21:52:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:27.417 21:52:45 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.417 21:52:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.417 21:52:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:27.417 21:52:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3255037 00:04:27.417 21:52:45 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3255037 ']' 00:04:27.417 21:52:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3255037 00:04:27.417 21:52:45 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:27.417 21:52:45 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.417 21:52:45 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3255037 00:04:27.679 21:52:45 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.679 21:52:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.679 21:52:45 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3255037' 00:04:27.679 killing process with pid 3255037 00:04:27.679 21:52:45 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3255037 00:04:27.679 21:52:45 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3255037 00:04:27.679 00:04:27.679 real 0m1.549s 00:04:27.679 user 0m2.831s 00:04:27.679 sys 0m0.473s 00:04:27.679 21:52:46 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.679 21:52:46 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.679 ************************************ 00:04:27.679 END TEST spdkcli_tcp 00:04:27.679 ************************************ 00:04:27.679 21:52:46 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.679 21:52:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.679 21:52:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.679 21:52:46 -- common/autotest_common.sh@10 -- # set +x 00:04:27.940 ************************************ 00:04:27.940 START TEST dpdk_mem_utility 00:04:27.940 ************************************ 00:04:27.940 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.940 * Looking for test storage... 00:04:27.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.941 21:52:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 
'LCOV_OPTS= 00:04:27.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.941 --rc genhtml_branch_coverage=1 00:04:27.941 --rc genhtml_function_coverage=1 00:04:27.941 --rc genhtml_legend=1 00:04:27.941 --rc geninfo_all_blocks=1 00:04:27.941 --rc geninfo_unexecuted_blocks=1 00:04:27.941 00:04:27.941 ' 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:27.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.941 --rc genhtml_branch_coverage=1 00:04:27.941 --rc genhtml_function_coverage=1 00:04:27.941 --rc genhtml_legend=1 00:04:27.941 --rc geninfo_all_blocks=1 00:04:27.941 --rc geninfo_unexecuted_blocks=1 00:04:27.941 00:04:27.941 ' 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:27.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.941 --rc genhtml_branch_coverage=1 00:04:27.941 --rc genhtml_function_coverage=1 00:04:27.941 --rc genhtml_legend=1 00:04:27.941 --rc geninfo_all_blocks=1 00:04:27.941 --rc geninfo_unexecuted_blocks=1 00:04:27.941 00:04:27.941 ' 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:27.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.941 --rc genhtml_branch_coverage=1 00:04:27.941 --rc genhtml_function_coverage=1 00:04:27.941 --rc genhtml_legend=1 00:04:27.941 --rc geninfo_all_blocks=1 00:04:27.941 --rc geninfo_unexecuted_blocks=1 00:04:27.941 00:04:27.941 ' 00:04:27.941 21:52:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:27.941 21:52:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3255400 00:04:27.941 21:52:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3255400 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@831 -- # 
'[' -z 3255400 ']' 00:04:27.941 21:52:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:27.941 21:52:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.202 [2024-10-12 21:52:46.443318] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:04:28.202 [2024-10-12 21:52:46.443395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255400 ] 00:04:28.202 [2024-10-12 21:52:46.523701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.202 [2024-10-12 21:52:46.565096] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.773 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.773 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:28.773 21:52:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.773 21:52:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.773 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 
00:04:28.773 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.773 { 00:04:28.773 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.773 } 00:04:28.773 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.773 21:52:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:29.033 DPDK memory size 860.000000 MiB in 1 heap(s) 00:04:29.033 1 heaps totaling size 860.000000 MiB 00:04:29.033 size: 860.000000 MiB heap id: 0 00:04:29.033 end heaps---------- 00:04:29.033 9 mempools totaling size 642.649841 MiB 00:04:29.033 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:29.033 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:29.033 size: 92.545471 MiB name: bdev_io_3255400 00:04:29.033 size: 51.011292 MiB name: evtpool_3255400 00:04:29.033 size: 50.003479 MiB name: msgpool_3255400 00:04:29.033 size: 36.509338 MiB name: fsdev_io_3255400 00:04:29.033 size: 21.763794 MiB name: PDU_Pool 00:04:29.033 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:29.034 size: 0.026123 MiB name: Session_Pool 00:04:29.034 end mempools------- 00:04:29.034 6 memzones totaling size 4.142822 MiB 00:04:29.034 size: 1.000366 MiB name: RG_ring_0_3255400 00:04:29.034 size: 1.000366 MiB name: RG_ring_1_3255400 00:04:29.034 size: 1.000366 MiB name: RG_ring_4_3255400 00:04:29.034 size: 1.000366 MiB name: RG_ring_5_3255400 00:04:29.034 size: 0.125366 MiB name: RG_ring_2_3255400 00:04:29.034 size: 0.015991 MiB name: RG_ring_3_3255400 00:04:29.034 end memzones------- 00:04:29.034 21:52:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:29.034 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:04:29.034 list of free elements. 
size: 13.984680 MiB 00:04:29.034 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:29.034 element at address: 0x200000800000 with size: 1.996948 MiB 00:04:29.034 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:04:29.034 element at address: 0x20001be00000 with size: 0.999878 MiB 00:04:29.034 element at address: 0x200034a00000 with size: 0.994446 MiB 00:04:29.034 element at address: 0x200009600000 with size: 0.959839 MiB 00:04:29.034 element at address: 0x200015e00000 with size: 0.954285 MiB 00:04:29.034 element at address: 0x20001c000000 with size: 0.936584 MiB 00:04:29.034 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:29.034 element at address: 0x20001d800000 with size: 0.582886 MiB 00:04:29.034 element at address: 0x200003e00000 with size: 0.495605 MiB 00:04:29.034 element at address: 0x20000d800000 with size: 0.490723 MiB 00:04:29.034 element at address: 0x20001c200000 with size: 0.485657 MiB 00:04:29.034 element at address: 0x200007000000 with size: 0.481934 MiB 00:04:29.034 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:04:29.034 element at address: 0x200003a00000 with size: 0.354858 MiB 00:04:29.034 list of standard malloc elements. 
size: 199.218628 MiB 00:04:29.034 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:04:29.034 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:04:29.034 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:04:29.034 element at address: 0x20001befff80 with size: 1.000122 MiB 00:04:29.034 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:04:29.034 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:29.034 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:04:29.034 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:29.034 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:04:29.034 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:29.034 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:29.034 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:29.034 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:29.034 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:29.034 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:29.034 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200003aff880 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20000707b600 with size: 0.000183 MiB 00:04:29.034 element at 
address: 0x20000707b6c0 with size: 0.000183 MiB 00:04:29.034 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:04:29.034 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:04:29.034 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20001d895380 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20001d895440 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:04:29.034 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:04:29.034 list of memzone associated elements. 
size: 646.796692 MiB 00:04:29.034 element at address: 0x20001d895500 with size: 211.416748 MiB 00:04:29.034 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:29.034 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:04:29.034 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:29.034 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:04:29.034 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3255400_0 00:04:29.034 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:29.034 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3255400_0 00:04:29.034 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:29.034 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3255400_0 00:04:29.034 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:04:29.034 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3255400_0 00:04:29.034 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:04:29.034 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:29.034 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:04:29.034 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:29.034 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:29.034 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3255400 00:04:29.034 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:29.034 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3255400 00:04:29.034 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:29.034 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3255400 00:04:29.034 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:04:29.034 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:29.034 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:04:29.034 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:29.034 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:04:29.034 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:29.034 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:04:29.034 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:29.034 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:29.034 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3255400 00:04:29.034 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:29.034 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3255400 00:04:29.034 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:04:29.034 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3255400 00:04:29.034 element at address: 0x200034afe940 with size: 1.000488 MiB 00:04:29.034 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3255400 00:04:29.034 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:04:29.034 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3255400 00:04:29.034 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:04:29.034 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3255400 00:04:29.034 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:04:29.034 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:29.034 element at address: 0x20000707b780 with size: 0.500488 MiB 00:04:29.034 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:29.034 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:04:29.034 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:29.034 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:04:29.034 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3255400 00:04:29.034 element at address: 0x2000096f5b80 with size: 
0.031738 MiB 00:04:29.034 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:29.034 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:04:29.034 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:29.034 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:04:29.034 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3255400 00:04:29.034 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:04:29.034 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:29.034 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:29.034 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3255400 00:04:29.034 element at address: 0x200003aff940 with size: 0.000305 MiB 00:04:29.034 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3255400 00:04:29.034 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:04:29.034 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3255400 00:04:29.034 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:04:29.034 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:29.034 21:52:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:29.034 21:52:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3255400 00:04:29.034 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3255400 ']' 00:04:29.034 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3255400 00:04:29.034 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:29.034 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:29.034 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3255400 00:04:29.034 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:29.034 
21:52:47 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:29.034 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3255400' 00:04:29.034 killing process with pid 3255400 00:04:29.034 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3255400 00:04:29.034 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3255400 00:04:29.296 00:04:29.296 real 0m1.414s 00:04:29.296 user 0m1.477s 00:04:29.296 sys 0m0.435s 00:04:29.296 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.296 21:52:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:29.296 ************************************ 00:04:29.296 END TEST dpdk_mem_utility 00:04:29.296 ************************************ 00:04:29.296 21:52:47 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:29.296 21:52:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.296 21:52:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.296 21:52:47 -- common/autotest_common.sh@10 -- # set +x 00:04:29.296 ************************************ 00:04:29.296 START TEST event 00:04:29.296 ************************************ 00:04:29.296 21:52:47 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:29.296 * Looking for test storage... 
00:04:29.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:29.296 21:52:47 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:29.296 21:52:47 event -- common/autotest_common.sh@1681 -- # lcov --version 00:04:29.296 21:52:47 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:29.572 21:52:47 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:29.572 21:52:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.572 21:52:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.572 21:52:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.572 21:52:47 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.573 21:52:47 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.573 21:52:47 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.573 21:52:47 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.573 21:52:47 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.573 21:52:47 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.573 21:52:47 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.573 21:52:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.573 21:52:47 event -- scripts/common.sh@344 -- # case "$op" in 00:04:29.573 21:52:47 event -- scripts/common.sh@345 -- # : 1 00:04:29.573 21:52:47 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.573 21:52:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.573 21:52:47 event -- scripts/common.sh@365 -- # decimal 1 00:04:29.573 21:52:47 event -- scripts/common.sh@353 -- # local d=1 00:04:29.573 21:52:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.573 21:52:47 event -- scripts/common.sh@355 -- # echo 1 00:04:29.573 21:52:47 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.573 21:52:47 event -- scripts/common.sh@366 -- # decimal 2 00:04:29.573 21:52:47 event -- scripts/common.sh@353 -- # local d=2 00:04:29.573 21:52:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.573 21:52:47 event -- scripts/common.sh@355 -- # echo 2 00:04:29.573 21:52:47 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.573 21:52:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.573 21:52:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.573 21:52:47 event -- scripts/common.sh@368 -- # return 0 00:04:29.573 21:52:47 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.573 21:52:47 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:29.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.573 --rc genhtml_branch_coverage=1 00:04:29.573 --rc genhtml_function_coverage=1 00:04:29.573 --rc genhtml_legend=1 00:04:29.573 --rc geninfo_all_blocks=1 00:04:29.573 --rc geninfo_unexecuted_blocks=1 00:04:29.573 00:04:29.573 ' 00:04:29.573 21:52:47 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:29.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.573 --rc genhtml_branch_coverage=1 00:04:29.573 --rc genhtml_function_coverage=1 00:04:29.573 --rc genhtml_legend=1 00:04:29.573 --rc geninfo_all_blocks=1 00:04:29.573 --rc geninfo_unexecuted_blocks=1 00:04:29.573 00:04:29.573 ' 00:04:29.573 21:52:47 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:29.573 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:29.573 --rc genhtml_branch_coverage=1 00:04:29.573 --rc genhtml_function_coverage=1 00:04:29.573 --rc genhtml_legend=1 00:04:29.573 --rc geninfo_all_blocks=1 00:04:29.573 --rc geninfo_unexecuted_blocks=1 00:04:29.573 00:04:29.573 ' 00:04:29.573 21:52:47 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:29.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.573 --rc genhtml_branch_coverage=1 00:04:29.573 --rc genhtml_function_coverage=1 00:04:29.573 --rc genhtml_legend=1 00:04:29.573 --rc geninfo_all_blocks=1 00:04:29.573 --rc geninfo_unexecuted_blocks=1 00:04:29.573 00:04:29.573 ' 00:04:29.573 21:52:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:29.573 21:52:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.573 21:52:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.573 21:52:47 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:29.573 21:52:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.573 21:52:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.573 ************************************ 00:04:29.573 START TEST event_perf 00:04:29.573 ************************************ 00:04:29.573 21:52:47 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.573 Running I/O for 1 seconds...[2024-10-12 21:52:47.927300] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:04:29.573 [2024-10-12 21:52:47.927390] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255715 ] 00:04:29.573 [2024-10-12 21:52:48.014915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.573 [2024-10-12 21:52:48.057438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.573 [2024-10-12 21:52:48.057596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.573 [2024-10-12 21:52:48.057755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.573 Running I/O for 1 seconds...[2024-10-12 21:52:48.057756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:04:30.962 00:04:30.962 lcore 0: 184818 00:04:30.962 lcore 1: 184820 00:04:30.962 lcore 2: 184816 00:04:30.962 lcore 3: 184816 00:04:30.962 done. 
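As an aside, the `scripts/common.sh` xtrace earlier in this run (`lt 1.15 2`, splitting each version on `IFS=.-:` and comparing field by field to pick lcov options) can be read as a dotted-version comparison. The sketch below is a hedged, simplified re-creation for illustration only, not the actual SPDK helper; the function name `version_lt` is an assumption.

```shell
#!/usr/bin/env bash
# Simplified sketch of the dotted-version "less than" check traced above.
# Splits each version on "." and compares numeric fields left to right.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.0 1.15 || echo "2.0 >= 1.15"
```

The real helper also handles `-` and `:` separators and an explicit operator argument; this sketch keeps only the numeric field-by-field core.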
00:04:30.962 00:04:30.962 real 0m1.188s 00:04:30.962 user 0m4.086s 00:04:30.962 sys 0m0.099s 00:04:30.962 21:52:49 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.962 21:52:49 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:30.962 ************************************ 00:04:30.962 END TEST event_perf 00:04:30.962 ************************************ 00:04:30.962 21:52:49 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:30.962 21:52:49 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:30.962 21:52:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.962 21:52:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.962 ************************************ 00:04:30.962 START TEST event_reactor 00:04:30.962 ************************************ 00:04:30.962 21:52:49 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:30.962 [2024-10-12 21:52:49.191231] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:04:30.962 [2024-10-12 21:52:49.191319] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255955 ] 00:04:30.962 [2024-10-12 21:52:49.271773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.962 [2024-10-12 21:52:49.300828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.973 test_start 00:04:31.973 oneshot 00:04:31.973 tick 100 00:04:31.973 tick 100 00:04:31.973 tick 250 00:04:31.973 tick 100 00:04:31.973 tick 100 00:04:31.973 tick 100 00:04:31.973 tick 250 00:04:31.973 tick 500 00:04:31.973 tick 100 00:04:31.973 tick 100 00:04:31.973 tick 250 00:04:31.973 tick 100 00:04:31.973 tick 100 00:04:31.973 test_end 00:04:31.973 00:04:31.973 real 0m1.167s 00:04:31.973 user 0m1.081s 00:04:31.973 sys 0m0.081s 00:04:31.973 21:52:50 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.973 21:52:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:31.973 ************************************ 00:04:31.973 END TEST event_reactor 00:04:31.973 ************************************ 00:04:31.973 21:52:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:31.973 21:52:50 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:31.973 21:52:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.973 21:52:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.973 ************************************ 00:04:31.973 START TEST event_reactor_perf 00:04:31.973 ************************************ 00:04:31.973 21:52:50 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:31.973 [2024-10-12 21:52:50.432199] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:04:31.973 [2024-10-12 21:52:50.432281] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256311 ] 00:04:32.361 [2024-10-12 21:52:50.514924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.361 [2024-10-12 21:52:50.542757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.301 test_start 00:04:33.301 test_end 00:04:33.301 Performance: 539896 events per second 00:04:33.301 00:04:33.301 real 0m1.165s 00:04:33.301 user 0m1.077s 00:04:33.301 sys 0m0.083s 00:04:33.301 21:52:51 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.301 21:52:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.301 ************************************ 00:04:33.301 END TEST event_reactor_perf 00:04:33.301 ************************************ 00:04:33.301 21:52:51 event -- event/event.sh@49 -- # uname -s 00:04:33.301 21:52:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:33.301 21:52:51 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:33.301 21:52:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.301 21:52:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.301 21:52:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.301 ************************************ 00:04:33.301 START TEST event_scheduler 00:04:33.301 ************************************ 00:04:33.301 21:52:51 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:33.301 * Looking for test storage... 00:04:33.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:33.301 21:52:51 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:33.301 21:52:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:04:33.301 21:52:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:33.561 21:52:51 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:33.561 21:52:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.562 21:52:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.562 --rc genhtml_branch_coverage=1 00:04:33.562 --rc genhtml_function_coverage=1 00:04:33.562 --rc genhtml_legend=1 00:04:33.562 --rc geninfo_all_blocks=1 00:04:33.562 --rc geninfo_unexecuted_blocks=1 00:04:33.562 00:04:33.562 ' 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.562 --rc genhtml_branch_coverage=1 00:04:33.562 --rc genhtml_function_coverage=1 00:04:33.562 --rc 
genhtml_legend=1 00:04:33.562 --rc geninfo_all_blocks=1 00:04:33.562 --rc geninfo_unexecuted_blocks=1 00:04:33.562 00:04:33.562 ' 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.562 --rc genhtml_branch_coverage=1 00:04:33.562 --rc genhtml_function_coverage=1 00:04:33.562 --rc genhtml_legend=1 00:04:33.562 --rc geninfo_all_blocks=1 00:04:33.562 --rc geninfo_unexecuted_blocks=1 00:04:33.562 00:04:33.562 ' 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:33.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.562 --rc genhtml_branch_coverage=1 00:04:33.562 --rc genhtml_function_coverage=1 00:04:33.562 --rc genhtml_legend=1 00:04:33.562 --rc geninfo_all_blocks=1 00:04:33.562 --rc geninfo_unexecuted_blocks=1 00:04:33.562 00:04:33.562 ' 00:04:33.562 21:52:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:33.562 21:52:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3256701 00:04:33.562 21:52:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.562 21:52:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3256701 00:04:33.562 21:52:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3256701 ']' 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.562 21:52:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.562 [2024-10-12 21:52:51.908897] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:04:33.562 [2024-10-12 21:52:51.908968] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256701 ] 00:04:33.562 [2024-10-12 21:52:51.990004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.822 [2024-10-12 21:52:52.060374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.822 [2024-10-12 21:52:52.060537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.822 [2024-10-12 21:52:52.060697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.822 [2024-10-12 21:52:52.060699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:34.392 21:52:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.392 [2024-10-12 21:52:52.719200] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:34.392 [2024-10-12 21:52:52.719224] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:34.392 [2024-10-12 21:52:52.719233] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:34.392 [2024-10-12 21:52:52.719240] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:34.392 [2024-10-12 21:52:52.719245] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.392 21:52:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.392 [2024-10-12 21:52:52.774342] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.392 21:52:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.392 21:52:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.392 ************************************ 00:04:34.392 START TEST scheduler_create_thread 00:04:34.392 ************************************ 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.392 2 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.392 3 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.392 4 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.392 5 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.392 21:52:52 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.392 6 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:34.392 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.653 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.653 7 00:04:34.653 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.653 21:52:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:34.653 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.653 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.653 8 00:04:34.653 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.653 21:52:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:34.653 21:52:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.653 21:52:52 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.914 9 00:04:34.914 21:52:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.914 21:52:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:34.914 21:52:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.914 21:52:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.297 10 00:04:36.297 21:52:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.297 21:52:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:36.297 21:52:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.297 21:52:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.237 21:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.237 21:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:37.237 21:52:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:37.237 21:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.237 21:52:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.808 21:52:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.808 21:52:56 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:37.808 21:52:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.808 21:52:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.379 21:52:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.379 21:52:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:38.379 21:52:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:38.379 21:52:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.379 21:52:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.950 21:52:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.950 00:04:38.950 real 0m4.566s 00:04:38.950 user 0m0.025s 00:04:38.950 sys 0m0.005s 00:04:38.950 21:52:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.950 21:52:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.950 ************************************ 00:04:38.950 END TEST scheduler_create_thread 00:04:38.950 ************************************ 00:04:38.950 21:52:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:38.950 21:52:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3256701 00:04:38.950 21:52:57 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3256701 ']' 00:04:38.950 21:52:57 event.event_scheduler -- common/autotest_common.sh@954 -- # 
kill -0 3256701 00:04:38.950 21:52:57 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:38.950 21:52:57 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.950 21:52:57 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3256701 00:04:39.210 21:52:57 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:39.210 21:52:57 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:39.210 21:52:57 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3256701' 00:04:39.210 killing process with pid 3256701 00:04:39.210 21:52:57 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3256701 00:04:39.210 21:52:57 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3256701 00:04:39.210 [2024-10-12 21:52:57.561193] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:39.472 00:04:39.472 real 0m6.083s 00:04:39.472 user 0m15.054s 00:04:39.472 sys 0m0.427s 00:04:39.472 21:52:57 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.472 21:52:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 ************************************ 00:04:39.472 END TEST event_scheduler 00:04:39.472 ************************************ 00:04:39.472 21:52:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:39.472 21:52:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:39.472 21:52:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.472 21:52:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.472 21:52:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 ************************************ 00:04:39.472 START TEST app_repeat 00:04:39.472 ************************************ 00:04:39.472 21:52:57 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3257795 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3257795' 00:04:39.472 Process app_repeat pid: 3257795 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:39.472 spdk_app_start Round 0 00:04:39.472 21:52:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3257795 /var/tmp/spdk-nbd.sock 00:04:39.472 21:52:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3257795 ']' 00:04:39.472 21:52:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.472 21:52:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.472 21:52:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:39.472 21:52:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.472 21:52:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.472 [2024-10-12 21:52:57.864528] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:04:39.472 [2024-10-12 21:52:57.864598] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257795 ] 00:04:39.472 [2024-10-12 21:52:57.943192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.733 [2024-10-12 21:52:57.985346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.733 [2024-10-12 21:52:57.985346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.733 21:52:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.733 21:52:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:39.733 21:52:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.994 Malloc0 00:04:39.994 21:52:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.994 Malloc1 00:04:39.994 21:52:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.994 
21:52:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.994 21:52:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.254 /dev/nbd0 00:04:40.254 21:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.254 21:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.254 21:52:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:40.254 21:52:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:40.254 21:52:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:40.254 21:52:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:40.254 21:52:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:40.254 21:52:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:40.254 21:52:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:40.254 21:52:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:40.255 21:52:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:40.255 1+0 records in 00:04:40.255 1+0 records out 00:04:40.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296379 s, 13.8 MB/s 00:04:40.255 21:52:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.255 21:52:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:40.255 21:52:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.255 21:52:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:40.255 21:52:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:40.255 21:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.255 21:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.255 21:52:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.514 /dev/nbd1 00:04:40.515 21:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.515 21:52:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:40.515 21:52:58 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.515 1+0 records in 00:04:40.515 1+0 records out 00:04:40.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276546 s, 14.8 MB/s 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:40.515 21:52:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:40.515 21:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.515 21:52:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.515 21:52:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.515 21:52:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.515 21:52:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.776 { 00:04:40.776 "nbd_device": "/dev/nbd0", 00:04:40.776 "bdev_name": "Malloc0" 00:04:40.776 }, 00:04:40.776 { 00:04:40.776 "nbd_device": "/dev/nbd1", 00:04:40.776 "bdev_name": "Malloc1" 00:04:40.776 } 00:04:40.776 ]' 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.776 { 00:04:40.776 "nbd_device": "/dev/nbd0", 00:04:40.776 "bdev_name": "Malloc0" 00:04:40.776 
}, 00:04:40.776 { 00:04:40.776 "nbd_device": "/dev/nbd1", 00:04:40.776 "bdev_name": "Malloc1" 00:04:40.776 } 00:04:40.776 ]' 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.776 /dev/nbd1' 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.776 /dev/nbd1' 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.776 256+0 records in 00:04:40.776 256+0 records out 00:04:40.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508784 s, 206 MB/s 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.776 256+0 records in 00:04:40.776 256+0 records out 00:04:40.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118175 s, 88.7 MB/s 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.776 256+0 records in 00:04:40.776 256+0 records out 00:04:40.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125956 s, 83.2 MB/s 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.776 21:52:59 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.776 21:52:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.037 21:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.037 21:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.037 21:52:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.037 21:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.037 21:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.037 21:52:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.037 21:52:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.037 21:52:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.037 21:52:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.037 21:52:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.297 21:52:59 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.297 21:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.557 21:52:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.557 21:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.557 21:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.557 21:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.557 21:52:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.557 21:52:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.557 21:52:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.557 21:52:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.557 21:52:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.557 21:52:59 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.557 21:53:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.817 [2024-10-12 21:53:00.090385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.817 [2024-10-12 21:53:00.120110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.817 [2024-10-12 21:53:00.120118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.817 [2024-10-12 21:53:00.149201] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.817 [2024-10-12 21:53:00.149233] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.116 21:53:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.116 21:53:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:45.116 spdk_app_start Round 1 00:04:45.116 21:53:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3257795 /var/tmp/spdk-nbd.sock 00:04:45.116 21:53:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3257795 ']' 00:04:45.116 21:53:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.116 21:53:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.116 21:53:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:45.116 21:53:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.116 21:53:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.116 21:53:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.116 21:53:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:45.116 21:53:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.116 Malloc0 00:04:45.116 21:53:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.116 Malloc1 00:04:45.116 21:53:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.116 21:53:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.116 21:53:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.116 21:53:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.116 21:53:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.116 21:53:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.116 21:53:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.116 21:53:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.116 21:53:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.116 21:53:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.117 21:53:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.117 21:53:03 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:45.117 21:53:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.117 21:53:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.117 21:53:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.117 21:53:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.377 /dev/nbd0 00:04:45.377 21:53:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.377 21:53:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.377 1+0 records in 00:04:45.377 1+0 records out 00:04:45.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274841 s, 14.9 MB/s 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:45.377 21:53:03 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:45.377 21:53:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:45.377 21:53:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.377 21:53:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.377 21:53:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:45.637 /dev/nbd1 00:04:45.637 21:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.637 21:53:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.637 21:53:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:45.637 21:53:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:45.637 21:53:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:45.637 21:53:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:45.638 21:53:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:45.638 21:53:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:45.638 21:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:45.638 21:53:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:45.638 21:53:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.638 1+0 records in 00:04:45.638 1+0 records out 00:04:45.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278969 s, 14.7 MB/s 00:04:45.638 21:53:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.638 21:53:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:45.638 21:53:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.638 21:53:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:45.638 21:53:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:45.638 21:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.638 21:53:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.638 21:53:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.638 21:53:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.638 21:53:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:45.898 { 00:04:45.898 "nbd_device": "/dev/nbd0", 00:04:45.898 "bdev_name": "Malloc0" 00:04:45.898 }, 00:04:45.898 { 00:04:45.898 "nbd_device": "/dev/nbd1", 00:04:45.898 "bdev_name": "Malloc1" 00:04:45.898 } 00:04:45.898 ]' 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.898 { 00:04:45.898 "nbd_device": "/dev/nbd0", 00:04:45.898 "bdev_name": "Malloc0" 00:04:45.898 }, 00:04:45.898 { 00:04:45.898 "nbd_device": "/dev/nbd1", 00:04:45.898 "bdev_name": "Malloc1" 00:04:45.898 } 00:04:45.898 ]' 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.898 /dev/nbd1' 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.898 /dev/nbd1' 00:04:45.898 
21:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.898 256+0 records in 00:04:45.898 256+0 records out 00:04:45.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120039 s, 87.4 MB/s 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.898 256+0 records in 00:04:45.898 256+0 records out 00:04:45.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120993 s, 86.7 MB/s 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.898 256+0 records in 00:04:45.898 256+0 records out 00:04:45.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131951 s, 79.5 MB/s 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.898 21:53:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.159 21:53:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.159 21:53:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.159 21:53:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.159 21:53:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.159 21:53:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.159 21:53:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.159 21:53:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.159 21:53:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.159 21:53:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.159 21:53:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.419 21:53:04 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.419 21:53:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.679 21:53:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.679 21:53:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.679 21:53:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.939 [2024-10-12 21:53:05.229007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.939 [2024-10-12 21:53:05.255326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.939 [2024-10-12 21:53:05.255326] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.939 [2024-10-12 21:53:05.284719] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.939 [2024-10-12 21:53:05.284751] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.236 21:53:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.236 21:53:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:50.236 spdk_app_start Round 2 00:04:50.236 21:53:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3257795 /var/tmp/spdk-nbd.sock 00:04:50.236 21:53:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3257795 ']' 00:04:50.236 21:53:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.236 21:53:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.236 21:53:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:50.236 21:53:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.236 21:53:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.236 21:53:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.236 21:53:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:50.236 21:53:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.236 Malloc0 00:04:50.236 21:53:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.236 Malloc1 00:04:50.236 21:53:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.236 21:53:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.497 /dev/nbd0 00:04:50.497 21:53:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.497 21:53:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.497 1+0 records in 00:04:50.497 1+0 records out 00:04:50.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275596 s, 14.9 MB/s 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:50.497 21:53:08 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:50.497 21:53:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:50.497 21:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.497 21:53:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.497 21:53:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.758 /dev/nbd1 00:04:50.758 21:53:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.758 21:53:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.758 1+0 records in 00:04:50.758 1+0 records out 00:04:50.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0045371 s, 903 kB/s 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:50.758 21:53:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:50.758 21:53:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.758 21:53:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.758 21:53:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.758 21:53:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.758 21:53:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.019 { 00:04:51.019 "nbd_device": "/dev/nbd0", 00:04:51.019 "bdev_name": "Malloc0" 00:04:51.019 }, 00:04:51.019 { 00:04:51.019 "nbd_device": "/dev/nbd1", 00:04:51.019 "bdev_name": "Malloc1" 00:04:51.019 } 00:04:51.019 ]' 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.019 { 00:04:51.019 "nbd_device": "/dev/nbd0", 00:04:51.019 "bdev_name": "Malloc0" 00:04:51.019 }, 00:04:51.019 { 00:04:51.019 "nbd_device": "/dev/nbd1", 00:04:51.019 "bdev_name": "Malloc1" 00:04:51.019 } 00:04:51.019 ]' 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.019 /dev/nbd1' 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.019 /dev/nbd1' 00:04:51.019 
21:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.019 256+0 records in 00:04:51.019 256+0 records out 00:04:51.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127504 s, 82.2 MB/s 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.019 256+0 records in 00:04:51.019 256+0 records out 00:04:51.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120826 s, 86.8 MB/s 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.019 256+0 records in 00:04:51.019 256+0 records out 00:04:51.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130782 s, 80.2 MB/s 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.019 21:53:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.280 21:53:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.280 21:53:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.280 21:53:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.280 21:53:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.280 21:53:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.280 21:53:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.280 21:53:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.280 21:53:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.280 21:53:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.280 21:53:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:51.540 21:53:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:51.540 21:53:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:51.540 21:53:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:51.540 21:53:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.540 21:53:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.540 21:53:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:51.540 21:53:09 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:51.540 21:53:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.540 21:53:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.540 21:53:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.540 21:53:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.800 21:53:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.800 21:53:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.800 21:53:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:52.061 [2024-10-12 21:53:10.357451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.061 [2024-10-12 21:53:10.384164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.061 [2024-10-12 21:53:10.384164] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.061 [2024-10-12 21:53:10.413142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.061 [2024-10-12 21:53:10.413175] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:55.357 21:53:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3257795 /var/tmp/spdk-nbd.sock 00:04:55.357 21:53:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3257795 ']' 00:04:55.357 21:53:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.357 21:53:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.357 21:53:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:55.358 21:53:13 event.app_repeat -- event/event.sh@39 -- # killprocess 3257795 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3257795 ']' 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3257795 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3257795 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3257795' 00:04:55.358 killing process with pid 3257795 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3257795 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3257795 00:04:55.358 spdk_app_start is called in Round 0. 00:04:55.358 Shutdown signal received, stop current app iteration 00:04:55.358 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:04:55.358 spdk_app_start is called in Round 1. 00:04:55.358 Shutdown signal received, stop current app iteration 00:04:55.358 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:04:55.358 spdk_app_start is called in Round 2. 
00:04:55.358 Shutdown signal received, stop current app iteration 00:04:55.358 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:04:55.358 spdk_app_start is called in Round 3. 00:04:55.358 Shutdown signal received, stop current app iteration 00:04:55.358 21:53:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:55.358 21:53:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:55.358 00:04:55.358 real 0m15.788s 00:04:55.358 user 0m34.636s 00:04:55.358 sys 0m2.284s 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.358 21:53:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.358 ************************************ 00:04:55.358 END TEST app_repeat 00:04:55.358 ************************************ 00:04:55.358 21:53:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:55.358 21:53:13 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:55.358 21:53:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.358 21:53:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.358 21:53:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.358 ************************************ 00:04:55.358 START TEST cpu_locks 00:04:55.358 ************************************ 00:04:55.358 21:53:13 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:55.358 * Looking for test storage... 
00:04:55.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:55.358 21:53:13 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:55.358 21:53:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:04:55.358 21:53:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:55.618 21:53:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:55.618 21:53:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.619 21:53:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:55.619 21:53:13 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.619 21:53:13 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:55.619 21:53:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:55.619 21:53:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.619 21:53:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:55.619 21:53:13 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.619 21:53:13 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.619 21:53:13 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.619 21:53:13 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:55.619 21:53:13 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.619 21:53:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:55.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.619 --rc genhtml_branch_coverage=1 00:04:55.619 --rc genhtml_function_coverage=1 00:04:55.619 --rc genhtml_legend=1 00:04:55.619 --rc geninfo_all_blocks=1 00:04:55.619 --rc geninfo_unexecuted_blocks=1 00:04:55.619 00:04:55.619 ' 00:04:55.619 21:53:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:55.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.619 --rc genhtml_branch_coverage=1 00:04:55.619 --rc genhtml_function_coverage=1 00:04:55.619 --rc genhtml_legend=1 00:04:55.619 --rc geninfo_all_blocks=1 00:04:55.619 --rc geninfo_unexecuted_blocks=1 
00:04:55.619 00:04:55.619 ' 00:04:55.619 21:53:13 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:55.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.619 --rc genhtml_branch_coverage=1 00:04:55.619 --rc genhtml_function_coverage=1 00:04:55.619 --rc genhtml_legend=1 00:04:55.619 --rc geninfo_all_blocks=1 00:04:55.619 --rc geninfo_unexecuted_blocks=1 00:04:55.619 00:04:55.619 ' 00:04:55.619 21:53:13 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:55.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.619 --rc genhtml_branch_coverage=1 00:04:55.619 --rc genhtml_function_coverage=1 00:04:55.619 --rc genhtml_legend=1 00:04:55.619 --rc geninfo_all_blocks=1 00:04:55.619 --rc geninfo_unexecuted_blocks=1 00:04:55.619 00:04:55.619 ' 00:04:55.619 21:53:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:55.619 21:53:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:55.619 21:53:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:55.619 21:53:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:55.619 21:53:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.619 21:53:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.619 21:53:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.619 ************************************ 00:04:55.619 START TEST default_locks 00:04:55.619 ************************************ 00:04:55.619 21:53:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:55.619 21:53:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3261354 00:04:55.619 21:53:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3261354 00:04:55.619 21:53:13 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.619 21:53:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3261354 ']' 00:04:55.619 21:53:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.619 21:53:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.619 21:53:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.619 21:53:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.619 21:53:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.619 [2024-10-12 21:53:14.001410] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:04:55.619 [2024-10-12 21:53:14.001460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261354 ] 00:04:55.619 [2024-10-12 21:53:14.079260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.879 [2024-10-12 21:53:14.109635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.449 21:53:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.449 21:53:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:56.449 21:53:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3261354 00:04:56.449 21:53:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3261354 00:04:56.449 21:53:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.026 lslocks: write error 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3261354 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3261354 ']' 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3261354 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3261354 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3261354' 00:04:57.026 killing process with pid 3261354 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3261354 00:04:57.026 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3261354 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3261354 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3261354 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3261354 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3261354 ']' 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3261354) - No such process 00:04:57.287 ERROR: process (pid: 3261354) is no longer running 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:57.287 00:04:57.287 real 0m1.600s 00:04:57.287 user 0m1.712s 00:04:57.287 sys 0m0.578s 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.287 21:53:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.287 ************************************ 00:04:57.287 END TEST default_locks 00:04:57.287 ************************************ 00:04:57.287 21:53:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:57.287 21:53:15 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.287 21:53:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.287 21:53:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.287 ************************************ 00:04:57.287 START TEST default_locks_via_rpc 00:04:57.287 ************************************ 00:04:57.287 21:53:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:57.287 21:53:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3261724 00:04:57.287 21:53:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3261724 00:04:57.287 21:53:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.287 21:53:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3261724 ']' 00:04:57.287 21:53:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.287 21:53:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.287 21:53:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.287 21:53:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.287 21:53:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.287 [2024-10-12 21:53:15.664431] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:04:57.287 [2024-10-12 21:53:15.664478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261724 ] 00:04:57.287 [2024-10-12 21:53:15.738690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.287 [2024-10-12 21:53:15.766988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.229 21:53:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3261724 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3261724 00:04:58.229 21:53:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3261724 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3261724 ']' 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3261724 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3261724 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3261724' 00:04:58.799 killing process with pid 3261724 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3261724 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3261724 00:04:58.799 00:04:58.799 real 0m1.680s 00:04:58.799 user 0m1.816s 00:04:58.799 sys 0m0.579s 00:04:58.799 21:53:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.799 21:53:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.799 ************************************ 00:04:58.799 END TEST default_locks_via_rpc 00:04:58.799 ************************************ 00:04:59.061 21:53:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:59.061 21:53:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.061 21:53:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.061 21:53:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.061 ************************************ 00:04:59.061 START TEST non_locking_app_on_locked_coremask 00:04:59.061 ************************************ 00:04:59.061 21:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:59.061 21:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3262093 00:04:59.061 21:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3262093 /var/tmp/spdk.sock 00:04:59.061 21:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.061 21:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3262093 ']' 00:04:59.061 21:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.061 21:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.061 21:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:59.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.061 21:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.061 21:53:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.061 [2024-10-12 21:53:17.421091] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:04:59.061 [2024-10-12 21:53:17.421144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262093 ] 00:04:59.061 [2024-10-12 21:53:17.497873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.061 [2024-10-12 21:53:17.526395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3262234 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3262234 /var/tmp/spdk2.sock 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3262234 ']' 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.003 21:53:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.003 [2024-10-12 21:53:18.263932] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:00.003 [2024-10-12 21:53:18.263987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262234 ] 00:05:00.003 [2024-10-12 21:53:18.336915] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:00.003 [2024-10-12 21:53:18.336943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.003 [2024-10-12 21:53:18.395652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.574 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.574 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:00.574 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3262093 00:05:00.574 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3262093 00:05:00.574 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.516 lslocks: write error 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3262093 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3262093 ']' 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3262093 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3262093 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3262093' 00:05:01.516 killing process with pid 3262093 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3262093 00:05:01.516 21:53:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3262093 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3262234 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3262234 ']' 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3262234 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3262234 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3262234' 00:05:01.777 killing process with pid 3262234 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3262234 00:05:01.777 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3262234 00:05:02.038 00:05:02.038 real 0m3.008s 00:05:02.038 user 0m3.339s 00:05:02.038 sys 0m0.946s 00:05:02.038 21:53:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.038 21:53:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.038 ************************************ 00:05:02.038 END TEST non_locking_app_on_locked_coremask 00:05:02.038 ************************************ 00:05:02.038 21:53:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:02.038 21:53:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.038 21:53:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.038 21:53:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.038 ************************************ 00:05:02.038 START TEST locking_app_on_unlocked_coremask 00:05:02.038 ************************************ 00:05:02.038 21:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:02.038 21:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3262804 00:05:02.038 21:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3262804 /var/tmp/spdk.sock 00:05:02.038 21:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:02.038 21:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3262804 ']' 00:05:02.038 21:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.038 21:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.038 21:53:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.038 21:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.038 21:53:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.038 [2024-10-12 21:53:20.510750] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:02.038 [2024-10-12 21:53:20.510806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262804 ] 00:05:02.298 [2024-10-12 21:53:20.589743] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:02.298 [2024-10-12 21:53:20.589779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.298 [2024-10-12 21:53:20.629000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3262819 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3262819 /var/tmp/spdk2.sock 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3262819 ']' 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.869 21:53:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.130 [2024-10-12 21:53:21.372913] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:03.130 [2024-10-12 21:53:21.372967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262819 ] 00:05:03.130 [2024-10-12 21:53:21.444935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.130 [2024-10-12 21:53:21.501444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.702 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.702 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:03.702 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3262819 00:05:03.702 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3262819 00:05:03.702 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.962 lslocks: write error 00:05:03.962 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3262804 00:05:03.962 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3262804 ']' 00:05:03.962 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3262804 00:05:03.962 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:04.222 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.222 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3262804 00:05:04.222 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.222 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.222 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3262804' 00:05:04.222 killing process with pid 3262804 00:05:04.222 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3262804 00:05:04.222 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3262804 00:05:04.482 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3262819 00:05:04.482 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3262819 ']' 00:05:04.482 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3262819 00:05:04.482 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:04.482 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.482 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3262819 00:05:04.482 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.482 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.482 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3262819' 00:05:04.482 killing process with pid 3262819 00:05:04.482 21:53:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3262819 00:05:04.482 21:53:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3262819 00:05:04.742 00:05:04.742 real 0m2.698s 00:05:04.742 user 0m2.995s 00:05:04.742 sys 0m0.848s 00:05:04.742 21:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.742 21:53:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.742 ************************************ 00:05:04.742 END TEST locking_app_on_unlocked_coremask 00:05:04.742 ************************************ 00:05:04.742 21:53:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:04.742 21:53:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.742 21:53:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.742 21:53:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.742 ************************************ 00:05:04.742 START TEST locking_app_on_locked_coremask 00:05:04.742 ************************************ 00:05:04.742 21:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:04.742 21:53:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3263284 00:05:04.742 21:53:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3263284 /var/tmp/spdk.sock 00:05:04.742 21:53:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.742 21:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3263284 ']' 00:05:04.742 21:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:04.742 21:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.742 21:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.003 21:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.003 21:53:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.003 [2024-10-12 21:53:23.289036] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:05.003 [2024-10-12 21:53:23.289093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263284 ] 00:05:05.003 [2024-10-12 21:53:23.371649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.003 [2024-10-12 21:53:23.412877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3263519 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3263519 /var/tmp/spdk2.sock 
00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3263519 /var/tmp/spdk2.sock 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3263519 /var/tmp/spdk2.sock 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3263519 ']' 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.943 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.943 [2024-10-12 21:53:24.139773] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:05.943 [2024-10-12 21:53:24.139824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263519 ] 00:05:05.943 [2024-10-12 21:53:24.209502] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3263284 has claimed it. 00:05:05.943 [2024-10-12 21:53:24.209536] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:06.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3263519) - No such process 00:05:06.514 ERROR: process (pid: 3263519) is no longer running 00:05:06.514 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.514 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:06.514 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:06.514 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:06.514 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:06.514 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:06.514 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3263284 00:05:06.514 21:53:24 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3263284 00:05:06.514 21:53:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.084 lslocks: write error 00:05:07.084 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3263284 00:05:07.084 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3263284 ']' 00:05:07.084 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3263284 00:05:07.084 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:07.084 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.084 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3263284 00:05:07.084 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.085 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.085 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3263284' 00:05:07.085 killing process with pid 3263284 00:05:07.085 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3263284 00:05:07.085 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3263284 00:05:07.345 00:05:07.345 real 0m2.363s 00:05:07.345 user 0m2.588s 00:05:07.345 sys 0m0.717s 00:05:07.345 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.345 21:53:25 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.345 ************************************ 00:05:07.345 END TEST locking_app_on_locked_coremask 00:05:07.345 ************************************ 00:05:07.345 21:53:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:07.345 21:53:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.345 21:53:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.346 21:53:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.346 ************************************ 00:05:07.346 START TEST locking_overlapped_coremask 00:05:07.346 ************************************ 00:05:07.346 21:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:07.346 21:53:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3263884 00:05:07.346 21:53:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3263884 /var/tmp/spdk.sock 00:05:07.346 21:53:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:07.346 21:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3263884 ']' 00:05:07.346 21:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.346 21:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.346 21:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:07.346 21:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.346 21:53:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.346 [2024-10-12 21:53:25.723260] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:07.346 [2024-10-12 21:53:25.723308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263884 ] 00:05:07.346 [2024-10-12 21:53:25.798279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:07.346 [2024-10-12 21:53:25.828737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.346 [2024-10-12 21:53:25.828922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.346 [2024-10-12 21:53:25.828923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3263942 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3263942 /var/tmp/spdk2.sock 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 3263942 /var/tmp/spdk2.sock 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3263942 /var/tmp/spdk2.sock 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3263942 ']' 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.289 21:53:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.289 [2024-10-12 21:53:26.565532] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:08.289 [2024-10-12 21:53:26.565583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263942 ] 00:05:08.289 [2024-10-12 21:53:26.654964] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3263884 has claimed it. 00:05:08.289 [2024-10-12 21:53:26.655004] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3263942) - No such process 00:05:08.861 ERROR: process (pid: 3263942) is no longer running 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3263884 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3263884 ']' 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3263884 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3263884 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3263884' 00:05:08.861 killing process with pid 3263884 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3263884 00:05:08.861 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3263884 00:05:09.122 00:05:09.122 real 0m1.787s 00:05:09.122 user 0m5.161s 00:05:09.122 sys 0m0.400s 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.122 
************************************ 00:05:09.122 END TEST locking_overlapped_coremask 00:05:09.122 ************************************ 00:05:09.122 21:53:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:09.122 21:53:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.122 21:53:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.122 21:53:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.122 ************************************ 00:05:09.122 START TEST locking_overlapped_coremask_via_rpc 00:05:09.122 ************************************ 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3264260 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3264260 /var/tmp/spdk.sock 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3264260 ']' 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:09.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.122 21:53:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.122 [2024-10-12 21:53:27.586963] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:09.122 [2024-10-12 21:53:27.587015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264260 ] 00:05:09.383 [2024-10-12 21:53:27.665334] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:09.383 [2024-10-12 21:53:27.665361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.383 [2024-10-12 21:53:27.696412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.383 [2024-10-12 21:53:27.696622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.383 [2024-10-12 21:53:27.696623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.971 21:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.971 21:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:09.971 21:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3264408 00:05:09.971 21:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3264408 /var/tmp/spdk2.sock 00:05:09.971 21:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3264408 ']' 00:05:09.971 21:53:28 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:09.971 21:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.971 21:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.971 21:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.971 21:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.971 21:53:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.971 [2024-10-12 21:53:28.434434] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:09.971 [2024-10-12 21:53:28.434488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264408 ] 00:05:10.232 [2024-10-12 21:53:28.524383] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:10.232 [2024-10-12 21:53:28.524407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.232 [2024-10-12 21:53:28.588636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.232 [2024-10-12 21:53:28.592225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.232 [2024-10-12 21:53:28.592227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.803 21:53:29 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.803 [2024-10-12 21:53:29.236178] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3264260 has claimed it. 00:05:10.803 request: 00:05:10.803 { 00:05:10.803 "method": "framework_enable_cpumask_locks", 00:05:10.803 "req_id": 1 00:05:10.803 } 00:05:10.803 Got JSON-RPC error response 00:05:10.803 response: 00:05:10.803 { 00:05:10.803 "code": -32603, 00:05:10.803 "message": "Failed to claim CPU core: 2" 00:05:10.803 } 00:05:10.803 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3264260 /var/tmp/spdk.sock 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 3264260 ']' 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.804 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.064 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.064 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:11.064 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3264408 /var/tmp/spdk2.sock 00:05:11.064 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3264408 ']' 00:05:11.064 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.064 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.064 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:11.064 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.064 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.324 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.325 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:11.325 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:11.325 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:11.325 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:11.325 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:11.325 00:05:11.325 real 0m2.072s 00:05:11.325 user 0m0.866s 00:05:11.325 sys 0m0.142s 00:05:11.325 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.325 21:53:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.325 ************************************ 00:05:11.325 END TEST locking_overlapped_coremask_via_rpc 00:05:11.325 ************************************ 00:05:11.325 21:53:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:11.325 21:53:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3264260 ]] 00:05:11.325 21:53:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3264260 00:05:11.325 21:53:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3264260 ']' 00:05:11.325 21:53:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3264260 00:05:11.325 21:53:29 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:11.325 21:53:29 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.325 21:53:29 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3264260 00:05:11.325 21:53:29 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.325 21:53:29 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.325 21:53:29 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3264260' 00:05:11.325 killing process with pid 3264260 00:05:11.325 21:53:29 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3264260 00:05:11.325 21:53:29 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3264260 00:05:11.585 21:53:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3264408 ]] 00:05:11.585 21:53:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3264408 00:05:11.585 21:53:29 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3264408 ']' 00:05:11.585 21:53:29 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3264408 00:05:11.585 21:53:29 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:11.585 21:53:29 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.585 21:53:29 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3264408 00:05:11.585 21:53:29 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:11.585 21:53:29 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:11.585 21:53:29 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3264408' 00:05:11.585 killing process with pid 3264408 00:05:11.585 21:53:29 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3264408 00:05:11.585 21:53:29 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3264408 00:05:11.846 21:53:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:11.846 21:53:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:11.846 21:53:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3264260 ]] 00:05:11.846 21:53:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3264260 00:05:11.846 21:53:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3264260 ']' 00:05:11.846 21:53:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3264260 00:05:11.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3264260) - No such process 00:05:11.846 21:53:30 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3264260 is not found' 00:05:11.846 Process with pid 3264260 is not found 00:05:11.846 21:53:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3264408 ]] 00:05:11.846 21:53:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3264408 00:05:11.846 21:53:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3264408 ']' 00:05:11.846 21:53:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3264408 00:05:11.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3264408) - No such process 00:05:11.846 21:53:30 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3264408 is not found' 00:05:11.846 Process with pid 3264408 is not found 00:05:11.846 21:53:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:11.846 00:05:11.846 real 0m16.480s 00:05:11.846 user 0m28.533s 00:05:11.846 sys 0m5.171s 00:05:11.846 21:53:30 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.846 
21:53:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.846 ************************************ 00:05:11.846 END TEST cpu_locks 00:05:11.846 ************************************ 00:05:11.846 00:05:11.846 real 0m42.552s 00:05:11.846 user 1m24.789s 00:05:11.846 sys 0m8.539s 00:05:11.846 21:53:30 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.846 21:53:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.846 ************************************ 00:05:11.846 END TEST event 00:05:11.846 ************************************ 00:05:11.846 21:53:30 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:11.846 21:53:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.846 21:53:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.846 21:53:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.846 ************************************ 00:05:11.846 START TEST thread 00:05:11.846 ************************************ 00:05:11.846 21:53:30 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:12.108 * Looking for test storage... 
00:05:12.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.108 21:53:30 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.108 21:53:30 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.108 21:53:30 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.108 21:53:30 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.108 21:53:30 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.108 21:53:30 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.108 21:53:30 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.108 21:53:30 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.108 21:53:30 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.108 21:53:30 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.108 21:53:30 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.108 21:53:30 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:12.108 21:53:30 thread -- scripts/common.sh@345 -- # : 1 00:05:12.108 21:53:30 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.108 21:53:30 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.108 21:53:30 thread -- scripts/common.sh@365 -- # decimal 1 00:05:12.108 21:53:30 thread -- scripts/common.sh@353 -- # local d=1 00:05:12.108 21:53:30 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.108 21:53:30 thread -- scripts/common.sh@355 -- # echo 1 00:05:12.108 21:53:30 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.108 21:53:30 thread -- scripts/common.sh@366 -- # decimal 2 00:05:12.108 21:53:30 thread -- scripts/common.sh@353 -- # local d=2 00:05:12.108 21:53:30 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.108 21:53:30 thread -- scripts/common.sh@355 -- # echo 2 00:05:12.108 21:53:30 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.108 21:53:30 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.108 21:53:30 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.108 21:53:30 thread -- scripts/common.sh@368 -- # return 0 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.108 --rc genhtml_branch_coverage=1 00:05:12.108 --rc genhtml_function_coverage=1 00:05:12.108 --rc genhtml_legend=1 00:05:12.108 --rc geninfo_all_blocks=1 00:05:12.108 --rc geninfo_unexecuted_blocks=1 00:05:12.108 00:05:12.108 ' 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.108 --rc genhtml_branch_coverage=1 00:05:12.108 --rc genhtml_function_coverage=1 00:05:12.108 --rc genhtml_legend=1 00:05:12.108 --rc geninfo_all_blocks=1 00:05:12.108 --rc geninfo_unexecuted_blocks=1 00:05:12.108 00:05:12.108 ' 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.108 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.108 --rc genhtml_branch_coverage=1 00:05:12.108 --rc genhtml_function_coverage=1 00:05:12.108 --rc genhtml_legend=1 00:05:12.108 --rc geninfo_all_blocks=1 00:05:12.108 --rc geninfo_unexecuted_blocks=1 00:05:12.108 00:05:12.108 ' 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.108 --rc genhtml_branch_coverage=1 00:05:12.108 --rc genhtml_function_coverage=1 00:05:12.108 --rc genhtml_legend=1 00:05:12.108 --rc geninfo_all_blocks=1 00:05:12.108 --rc geninfo_unexecuted_blocks=1 00:05:12.108 00:05:12.108 ' 00:05:12.108 21:53:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.108 21:53:30 thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.108 ************************************ 00:05:12.108 START TEST thread_poller_perf 00:05:12.108 ************************************ 00:05:12.108 21:53:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:12.108 [2024-10-12 21:53:30.550307] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:12.108 [2024-10-12 21:53:30.550390] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265044 ] 00:05:12.369 [2024-10-12 21:53:30.632953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.369 [2024-10-12 21:53:30.663616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.369 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:13.310 [2024-10-12T19:53:31.799Z] ====================================== 00:05:13.310 [2024-10-12T19:53:31.799Z] busy:2410220386 (cyc) 00:05:13.310 [2024-10-12T19:53:31.799Z] total_run_count: 418000 00:05:13.310 [2024-10-12T19:53:31.799Z] tsc_hz: 2400000000 (cyc) 00:05:13.310 [2024-10-12T19:53:31.799Z] ====================================== 00:05:13.310 [2024-10-12T19:53:31.799Z] poller_cost: 5766 (cyc), 2402 (nsec) 00:05:13.310 00:05:13.310 real 0m1.179s 00:05:13.310 user 0m1.091s 00:05:13.310 sys 0m0.085s 00:05:13.310 21:53:31 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.310 21:53:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.310 ************************************ 00:05:13.310 END TEST thread_poller_perf 00:05:13.310 ************************************ 00:05:13.310 21:53:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:13.310 21:53:31 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:13.310 21:53:31 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.310 21:53:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.310 ************************************ 00:05:13.310 START TEST thread_poller_perf 00:05:13.310 
************************************ 00:05:13.310 21:53:31 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:13.571 [2024-10-12 21:53:31.805221] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:13.571 [2024-10-12 21:53:31.805311] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265229 ] 00:05:13.571 [2024-10-12 21:53:31.888424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.571 [2024-10-12 21:53:31.927259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.571 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:14.513 [2024-10-12T19:53:33.002Z] ====================================== 00:05:14.513 [2024-10-12T19:53:33.002Z] busy:2401599188 (cyc) 00:05:14.513 [2024-10-12T19:53:33.002Z] total_run_count: 5556000 00:05:14.513 [2024-10-12T19:53:33.002Z] tsc_hz: 2400000000 (cyc) 00:05:14.513 [2024-10-12T19:53:33.002Z] ====================================== 00:05:14.513 [2024-10-12T19:53:33.002Z] poller_cost: 432 (cyc), 180 (nsec) 00:05:14.513 00:05:14.513 real 0m1.179s 00:05:14.513 user 0m1.077s 00:05:14.513 sys 0m0.098s 00:05:14.513 21:53:32 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.513 21:53:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.513 ************************************ 00:05:14.513 END TEST thread_poller_perf 00:05:14.513 ************************************ 00:05:14.513 21:53:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:14.773 00:05:14.773 real 0m2.711s 00:05:14.773 user 0m2.335s 00:05:14.773 sys 0m0.390s 00:05:14.773 21:53:33 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.773 21:53:33 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.773 ************************************ 00:05:14.773 END TEST thread 00:05:14.773 ************************************ 00:05:14.773 21:53:33 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:14.773 21:53:33 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:14.773 21:53:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.773 21:53:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.773 21:53:33 -- common/autotest_common.sh@10 -- # set +x 00:05:14.773 ************************************ 00:05:14.773 START TEST app_cmdline 00:05:14.774 ************************************ 00:05:14.774 21:53:33 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:14.774 * Looking for test storage... 00:05:14.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:14.774 21:53:33 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:14.774 21:53:33 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:14.774 21:53:33 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:15.034 21:53:33 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.034 21:53:33 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.035 21:53:33 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.035 --rc genhtml_branch_coverage=1 
00:05:15.035 --rc genhtml_function_coverage=1 00:05:15.035 --rc genhtml_legend=1 00:05:15.035 --rc geninfo_all_blocks=1 00:05:15.035 --rc geninfo_unexecuted_blocks=1 00:05:15.035 00:05:15.035 ' 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.035 --rc genhtml_branch_coverage=1 00:05:15.035 --rc genhtml_function_coverage=1 00:05:15.035 --rc genhtml_legend=1 00:05:15.035 --rc geninfo_all_blocks=1 00:05:15.035 --rc geninfo_unexecuted_blocks=1 00:05:15.035 00:05:15.035 ' 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.035 --rc genhtml_branch_coverage=1 00:05:15.035 --rc genhtml_function_coverage=1 00:05:15.035 --rc genhtml_legend=1 00:05:15.035 --rc geninfo_all_blocks=1 00:05:15.035 --rc geninfo_unexecuted_blocks=1 00:05:15.035 00:05:15.035 ' 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.035 --rc genhtml_branch_coverage=1 00:05:15.035 --rc genhtml_function_coverage=1 00:05:15.035 --rc genhtml_legend=1 00:05:15.035 --rc geninfo_all_blocks=1 00:05:15.035 --rc geninfo_unexecuted_blocks=1 00:05:15.035 00:05:15.035 ' 00:05:15.035 21:53:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:15.035 21:53:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3265516 00:05:15.035 21:53:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3265516 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3265516 ']' 00:05:15.035 21:53:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.035 21:53:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:15.035 [2024-10-12 21:53:33.345369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:15.035 [2024-10-12 21:53:33.345445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265516 ] 00:05:15.035 [2024-10-12 21:53:33.441612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.035 [2024-10-12 21:53:33.475561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:15.977 { 00:05:15.977 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:05:15.977 "fields": { 00:05:15.977 "major": 24, 00:05:15.977 "minor": 9, 00:05:15.977 "patch": 1, 00:05:15.977 "suffix": "-pre", 00:05:15.977 "commit": "b18e1bd62" 00:05:15.977 } 00:05:15.977 } 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@24 
-- # expected_methods+=("spdk_get_version") 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:15.977 21:53:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:15.977 21:53:34 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:16.238 request: 00:05:16.238 { 00:05:16.238 "method": "env_dpdk_get_mem_stats", 00:05:16.238 "req_id": 1 00:05:16.238 } 00:05:16.238 Got JSON-RPC error response 00:05:16.238 response: 00:05:16.238 { 00:05:16.238 "code": -32601, 00:05:16.238 "message": "Method not found" 00:05:16.238 } 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:16.238 21:53:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3265516 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3265516 ']' 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3265516 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3265516 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3265516' 00:05:16.238 killing process with pid 3265516 00:05:16.238 
21:53:34 app_cmdline -- common/autotest_common.sh@969 -- # kill 3265516 00:05:16.238 21:53:34 app_cmdline -- common/autotest_common.sh@974 -- # wait 3265516 00:05:16.499 00:05:16.499 real 0m1.700s 00:05:16.499 user 0m1.998s 00:05:16.499 sys 0m0.484s 00:05:16.499 21:53:34 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.499 21:53:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:16.499 ************************************ 00:05:16.499 END TEST app_cmdline 00:05:16.499 ************************************ 00:05:16.499 21:53:34 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:16.499 21:53:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.499 21:53:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.499 21:53:34 -- common/autotest_common.sh@10 -- # set +x 00:05:16.499 ************************************ 00:05:16.499 START TEST version 00:05:16.499 ************************************ 00:05:16.499 21:53:34 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:16.499 * Looking for test storage... 
00:05:16.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:16.499 21:53:34 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:16.499 21:53:34 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:16.499 21:53:34 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:16.761 21:53:35 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:16.761 21:53:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.761 21:53:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.761 21:53:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.761 21:53:35 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.761 21:53:35 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.761 21:53:35 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.761 21:53:35 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.761 21:53:35 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.761 21:53:35 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.761 21:53:35 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.761 21:53:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.761 21:53:35 version -- scripts/common.sh@344 -- # case "$op" in 00:05:16.761 21:53:35 version -- scripts/common.sh@345 -- # : 1 00:05:16.761 21:53:35 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.761 21:53:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.761 21:53:35 version -- scripts/common.sh@365 -- # decimal 1 00:05:16.761 21:53:35 version -- scripts/common.sh@353 -- # local d=1 00:05:16.761 21:53:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.761 21:53:35 version -- scripts/common.sh@355 -- # echo 1 00:05:16.761 21:53:35 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.761 21:53:35 version -- scripts/common.sh@366 -- # decimal 2 00:05:16.761 21:53:35 version -- scripts/common.sh@353 -- # local d=2 00:05:16.761 21:53:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.761 21:53:35 version -- scripts/common.sh@355 -- # echo 2 00:05:16.761 21:53:35 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.761 21:53:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.761 21:53:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.761 21:53:35 version -- scripts/common.sh@368 -- # return 0 00:05:16.761 21:53:35 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.761 21:53:35 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.761 --rc genhtml_branch_coverage=1 00:05:16.761 --rc genhtml_function_coverage=1 00:05:16.761 --rc genhtml_legend=1 00:05:16.761 --rc geninfo_all_blocks=1 00:05:16.761 --rc geninfo_unexecuted_blocks=1 00:05:16.761 00:05:16.761 ' 00:05:16.761 21:53:35 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.761 --rc genhtml_branch_coverage=1 00:05:16.761 --rc genhtml_function_coverage=1 00:05:16.761 --rc genhtml_legend=1 00:05:16.761 --rc geninfo_all_blocks=1 00:05:16.761 --rc geninfo_unexecuted_blocks=1 00:05:16.761 00:05:16.761 ' 00:05:16.761 21:53:35 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:16.761 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.761 --rc genhtml_branch_coverage=1 00:05:16.761 --rc genhtml_function_coverage=1 00:05:16.761 --rc genhtml_legend=1 00:05:16.761 --rc geninfo_all_blocks=1 00:05:16.761 --rc geninfo_unexecuted_blocks=1 00:05:16.761 00:05:16.761 ' 00:05:16.761 21:53:35 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.761 --rc genhtml_branch_coverage=1 00:05:16.761 --rc genhtml_function_coverage=1 00:05:16.761 --rc genhtml_legend=1 00:05:16.761 --rc geninfo_all_blocks=1 00:05:16.761 --rc geninfo_unexecuted_blocks=1 00:05:16.761 00:05:16.761 ' 00:05:16.761 21:53:35 version -- app/version.sh@17 -- # get_header_version major 00:05:16.761 21:53:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.761 21:53:35 version -- app/version.sh@14 -- # cut -f2 00:05:16.761 21:53:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.761 21:53:35 version -- app/version.sh@17 -- # major=24 00:05:16.761 21:53:35 version -- app/version.sh@18 -- # get_header_version minor 00:05:16.761 21:53:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.761 21:53:35 version -- app/version.sh@14 -- # cut -f2 00:05:16.761 21:53:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.761 21:53:35 version -- app/version.sh@18 -- # minor=9 00:05:16.761 21:53:35 version -- app/version.sh@19 -- # get_header_version patch 00:05:16.761 21:53:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.761 21:53:35 version -- app/version.sh@14 -- # cut -f2 00:05:16.761 21:53:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.761 
21:53:35 version -- app/version.sh@19 -- # patch=1 00:05:16.761 21:53:35 version -- app/version.sh@20 -- # get_header_version suffix 00:05:16.761 21:53:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.761 21:53:35 version -- app/version.sh@14 -- # cut -f2 00:05:16.761 21:53:35 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.761 21:53:35 version -- app/version.sh@20 -- # suffix=-pre 00:05:16.761 21:53:35 version -- app/version.sh@22 -- # version=24.9 00:05:16.761 21:53:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:16.761 21:53:35 version -- app/version.sh@25 -- # version=24.9.1 00:05:16.761 21:53:35 version -- app/version.sh@28 -- # version=24.9.1rc0 00:05:16.761 21:53:35 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:16.761 21:53:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:16.761 21:53:35 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:05:16.761 21:53:35 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:05:16.761 00:05:16.761 real 0m0.280s 00:05:16.761 user 0m0.167s 00:05:16.761 sys 0m0.164s 00:05:16.761 21:53:35 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.761 21:53:35 version -- common/autotest_common.sh@10 -- # set +x 00:05:16.761 ************************************ 00:05:16.761 END TEST version 00:05:16.761 ************************************ 00:05:16.761 21:53:35 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:16.761 21:53:35 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:16.761 21:53:35 -- 
spdk/autotest.sh@194 -- # uname -s 00:05:16.762 21:53:35 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:16.762 21:53:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:16.762 21:53:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:16.762 21:53:35 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:16.762 21:53:35 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:16.762 21:53:35 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:16.762 21:53:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.762 21:53:35 -- common/autotest_common.sh@10 -- # set +x 00:05:16.762 21:53:35 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:16.762 21:53:35 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:16.762 21:53:35 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:16.762 21:53:35 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:16.762 21:53:35 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:16.762 21:53:35 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:16.762 21:53:35 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:16.762 21:53:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:16.762 21:53:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.762 21:53:35 -- common/autotest_common.sh@10 -- # set +x 00:05:17.022 ************************************ 00:05:17.022 START TEST nvmf_tcp 00:05:17.022 ************************************ 00:05:17.022 21:53:35 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:17.022 * Looking for test storage... 
00:05:17.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:17.022 21:53:35 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:17.022 21:53:35 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:17.022 21:53:35 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:17.022 21:53:35 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:17.022 21:53:35 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.023 21:53:35 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:17.023 21:53:35 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.023 21:53:35 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:17.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.023 --rc genhtml_branch_coverage=1 00:05:17.023 --rc genhtml_function_coverage=1 00:05:17.023 --rc genhtml_legend=1 00:05:17.023 --rc geninfo_all_blocks=1 00:05:17.023 --rc geninfo_unexecuted_blocks=1 00:05:17.023 00:05:17.023 ' 00:05:17.023 21:53:35 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:17.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.023 --rc genhtml_branch_coverage=1 00:05:17.023 --rc genhtml_function_coverage=1 00:05:17.023 --rc genhtml_legend=1 00:05:17.023 --rc geninfo_all_blocks=1 00:05:17.023 --rc geninfo_unexecuted_blocks=1 00:05:17.023 00:05:17.023 ' 00:05:17.023 21:53:35 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:17.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.023 --rc genhtml_branch_coverage=1 00:05:17.023 --rc genhtml_function_coverage=1 00:05:17.023 --rc genhtml_legend=1 00:05:17.023 --rc geninfo_all_blocks=1 00:05:17.023 --rc geninfo_unexecuted_blocks=1 00:05:17.023 00:05:17.023 ' 00:05:17.023 21:53:35 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:17.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.023 --rc genhtml_branch_coverage=1 00:05:17.023 --rc genhtml_function_coverage=1 00:05:17.023 --rc genhtml_legend=1 00:05:17.023 --rc geninfo_all_blocks=1 00:05:17.023 --rc geninfo_unexecuted_blocks=1 00:05:17.023 00:05:17.023 ' 00:05:17.023 21:53:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:17.023 21:53:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:17.023 21:53:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:17.023 21:53:35 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:17.023 21:53:35 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.023 21:53:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.284 ************************************ 00:05:17.284 START TEST nvmf_target_core 00:05:17.284 ************************************ 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:17.284 * Looking for test storage... 
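The `lt 1.15 2` trace above walks the component-wise comparison in scripts/common.sh: both version strings are split on `.`, `-` and `:` into arrays, then compared numerically field by field. A self-contained re-implementation of the same idea (an approximation for illustration, not the exact scripts/common.sh code):

```shell
# Approximate re-implementation of the cmp_versions/lt logic traced above:
# split on .-:, treat missing fields as 0, compare numerically left to right.
lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

The outcome decides which lcov option syntax the test harness uses; lcov 1.x here selects the `--rc lcov_branch_coverage=1` spelling exported as `LCOV_OPTS` in the trace.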
00:05:17.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:17.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.284 --rc genhtml_branch_coverage=1 00:05:17.284 --rc genhtml_function_coverage=1 00:05:17.284 --rc genhtml_legend=1 00:05:17.284 --rc geninfo_all_blocks=1 00:05:17.284 --rc geninfo_unexecuted_blocks=1 00:05:17.284 00:05:17.284 ' 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:17.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.284 --rc genhtml_branch_coverage=1 
00:05:17.284 --rc genhtml_function_coverage=1 00:05:17.284 --rc genhtml_legend=1 00:05:17.284 --rc geninfo_all_blocks=1 00:05:17.284 --rc geninfo_unexecuted_blocks=1 00:05:17.284 00:05:17.284 ' 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:17.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.284 --rc genhtml_branch_coverage=1 00:05:17.284 --rc genhtml_function_coverage=1 00:05:17.284 --rc genhtml_legend=1 00:05:17.284 --rc geninfo_all_blocks=1 00:05:17.284 --rc geninfo_unexecuted_blocks=1 00:05:17.284 00:05:17.284 ' 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:17.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.284 --rc genhtml_branch_coverage=1 00:05:17.284 --rc genhtml_function_coverage=1 00:05:17.284 --rc genhtml_legend=1 00:05:17.284 --rc geninfo_all_blocks=1 00:05:17.284 --rc geninfo_unexecuted_blocks=1 00:05:17.284 00:05:17.284 ' 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.284 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
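The `[: : integer expression expected` message in the trace is bash complaining that `'[' '' -eq 1 ']'` hands an empty string to the arithmetic operator `-eq`, which requires integer operands. A minimal reproduction and a defensive rewrite (the variable name is illustrative, not taken from common.sh):

```shell
# Reproduce the "[: : integer expression expected" failure from the trace:
# -eq requires both operands to be integers, and "" is not one.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
else
    echo "disabled"        # the test errors out (status 2) and falls through
fi

# Supplying a default value sidesteps the error entirely:
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

As the trace shows, the error is harmless here: the failed test takes the false branch and the script continues, which is why the run proceeds past line 33 of common.sh each time.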
00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.285 21:53:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:17.547 ************************************ 00:05:17.547 START TEST nvmf_abort 00:05:17.547 ************************************ 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:17.547 * Looking for test storage... 
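Each time paths/export.sh is sourced in the traces above, the same /opt/golangci, /opt/protoc and /opt/go directories are prepended again, so PATH grows with duplicate entries on every nested test. The repetition is functionally harmless but inflates the log; a common dedup idiom looks like this (a sketch, not part of the SPDK scripts):

```shell
# Remove repeated entries from a PATH-like string, keeping first occurrences.
dedup_path() {
    local seen=":" out="" dir
    local IFS=":"
    for dir in $1; do
        case "$seen" in
            *":$dir:"*) ;;                       # already kept, drop the repeat
            *) out="${out:+$out:}$dir"
               seen="$seen$dir:" ;;
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin"
```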
00:05:17.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.547 
21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:17.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.547 --rc genhtml_branch_coverage=1 00:05:17.547 --rc genhtml_function_coverage=1 00:05:17.547 --rc genhtml_legend=1 00:05:17.547 --rc geninfo_all_blocks=1 00:05:17.547 --rc 
geninfo_unexecuted_blocks=1 00:05:17.547 00:05:17.547 ' 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:17.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.547 --rc genhtml_branch_coverage=1 00:05:17.547 --rc genhtml_function_coverage=1 00:05:17.547 --rc genhtml_legend=1 00:05:17.547 --rc geninfo_all_blocks=1 00:05:17.547 --rc geninfo_unexecuted_blocks=1 00:05:17.547 00:05:17.547 ' 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:17.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.547 --rc genhtml_branch_coverage=1 00:05:17.547 --rc genhtml_function_coverage=1 00:05:17.547 --rc genhtml_legend=1 00:05:17.547 --rc geninfo_all_blocks=1 00:05:17.547 --rc geninfo_unexecuted_blocks=1 00:05:17.547 00:05:17.547 ' 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:17.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.547 --rc genhtml_branch_coverage=1 00:05:17.547 --rc genhtml_function_coverage=1 00:05:17.547 --rc genhtml_legend=1 00:05:17.547 --rc geninfo_all_blocks=1 00:05:17.547 --rc geninfo_unexecuted_blocks=1 00:05:17.547 00:05:17.547 ' 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.547 21:53:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.547 21:53:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.547 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:17.548 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.809 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:05:17.809 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:05:17.809 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:17.809 21:53:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:05:26.028 21:53:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:26.028 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:26.028 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:26.028 21:53:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:26.028 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.028 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:26.029 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:26.029 
21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr 
flush cvl_0_0 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:26.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:26.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:05:26.029 00:05:26.029 --- 10.0.0.2 ping statistics --- 00:05:26.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.029 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:26.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:26.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:05:26.029 00:05:26.029 --- 10.0.0.1 ping statistics --- 00:05:26.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.029 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=3269971 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 3269971 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3269971 ']' 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.029 21:53:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.029 [2024-10-12 21:53:43.611343] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:26.029 [2024-10-12 21:53:43.611408] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:26.029 [2024-10-12 21:53:43.702836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.029 [2024-10-12 21:53:43.753442] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:26.029 [2024-10-12 21:53:43.753499] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:26.029 [2024-10-12 21:53:43.753508] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:26.029 [2024-10-12 21:53:43.753515] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:26.029 [2024-10-12 21:53:43.753522] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:26.029 [2024-10-12 21:53:43.753680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.029 [2024-10-12 21:53:43.753840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.029 [2024-10-12 21:53:43.753841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.029 [2024-10-12 21:53:44.471350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.029 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.290 Malloc0 00:05:26.290 21:53:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.291 Delay0 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.291 [2024-10-12 21:53:44.555664] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.291 21:53:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:26.291 [2024-10-12 21:53:44.685752] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:28.837 Initializing NVMe Controllers 00:05:28.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:28.837 controller IO queue size 128 less than required 00:05:28.837 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:28.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:28.837 Initialization complete. Launching workers. 
00:05:28.837 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28107 00:05:28.837 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28168, failed to submit 62 00:05:28.837 success 28111, unsuccessful 57, failed 0 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:28.837 rmmod nvme_tcp 00:05:28.837 rmmod nvme_fabrics 00:05:28.837 rmmod nvme_keyring 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:28.837 21:53:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 3269971 ']' 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 3269971 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3269971 ']' 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3269971 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3269971 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3269971' 00:05:28.837 killing process with pid 3269971 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3269971 00:05:28.837 21:53:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3269971 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- 
# grep -v SPDK_NVMF 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:28.837 21:53:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:30.750 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:30.750 00:05:30.750 real 0m13.395s 00:05:30.750 user 0m14.051s 00:05:30.750 sys 0m6.629s 00:05:30.750 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.750 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.750 ************************************ 00:05:30.750 END TEST nvmf_abort 00:05:30.750 ************************************ 00:05:30.750 21:53:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:30.750 21:53:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:30.750 21:53:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.750 21:53:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:31.012 ************************************ 00:05:31.012 START TEST nvmf_ns_hotplug_stress 00:05:31.012 ************************************ 00:05:31.012 21:53:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:31.012 * Looking for test storage... 00:05:31.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.012 
21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.012 21:53:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:31.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.012 --rc genhtml_branch_coverage=1 00:05:31.012 --rc genhtml_function_coverage=1 00:05:31.012 --rc genhtml_legend=1 00:05:31.012 --rc geninfo_all_blocks=1 00:05:31.012 --rc geninfo_unexecuted_blocks=1 00:05:31.012 00:05:31.012 ' 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:31.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.012 --rc genhtml_branch_coverage=1 00:05:31.012 --rc genhtml_function_coverage=1 00:05:31.012 --rc genhtml_legend=1 00:05:31.012 --rc geninfo_all_blocks=1 00:05:31.012 --rc geninfo_unexecuted_blocks=1 00:05:31.012 00:05:31.012 ' 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:31.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.012 --rc genhtml_branch_coverage=1 00:05:31.012 --rc genhtml_function_coverage=1 00:05:31.012 --rc genhtml_legend=1 00:05:31.012 --rc geninfo_all_blocks=1 00:05:31.012 --rc geninfo_unexecuted_blocks=1 00:05:31.012 00:05:31.012 ' 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:31.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.012 --rc genhtml_branch_coverage=1 00:05:31.012 --rc genhtml_function_coverage=1 00:05:31.012 --rc genhtml_legend=1 00:05:31.012 --rc geninfo_all_blocks=1 00:05:31.012 --rc geninfo_unexecuted_blocks=1 00:05:31.012 
00:05:31.012 ' 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.012 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.013 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.273 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.273 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.273 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:31.273 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:31.273 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:05:31.273 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:31.273 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:05:31.273 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:05:31.273 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:05:31.273 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:31.274 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:31.274 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:31.274 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:05:31.274 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:05:31.274 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:31.274 21:53:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:39.412 21:53:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:05:39.412 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:39.412 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:05:39.412 21:53:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:05:39.412 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:39.413 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:39.413 21:53:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:39.413 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 
-- # NVMF_SECOND_TARGET_IP= 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:39.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:39.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:05:39.413 00:05:39.413 --- 10.0.0.2 ping statistics --- 00:05:39.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.413 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:39.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:39.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:05:39.413 00:05:39.413 --- 10.0.0.1 ping statistics --- 00:05:39.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.413 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:05:39.413 21:53:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=3275008 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 3275008 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3275008 ']' 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.413 21:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:39.413 [2024-10-12 21:53:57.015068] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:39.413 [2024-10-12 21:53:57.015149] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:39.413 [2024-10-12 21:53:57.105053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.413 [2024-10-12 21:53:57.151751] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:39.413 [2024-10-12 21:53:57.151811] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:39.413 [2024-10-12 21:53:57.151819] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.413 [2024-10-12 21:53:57.151827] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.413 [2024-10-12 21:53:57.151833] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:39.413 [2024-10-12 21:53:57.151995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.413 [2024-10-12 21:53:57.152169] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.413 [2024-10-12 21:53:57.152183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.413 21:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.413 21:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:39.413 21:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:05:39.413 21:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.413 21:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:39.413 21:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:39.413 21:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:39.413 21:53:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:39.675 [2024-10-12 21:53:58.044031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.675 21:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:39.935 21:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:40.195 [2024-10-12 21:53:58.461154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:40.195 21:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:40.455 21:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:40.455 Malloc0 00:05:40.455 21:53:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:40.716 Delay0 00:05:40.716 21:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.977 21:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:40.977 NULL1 00:05:41.238 21:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:41.238 21:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:41.238 21:53:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3275594 00:05:41.238 21:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:41.238 21:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.498 21:53:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.758 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:41.758 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:41.758 true 00:05:41.758 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:41.758 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.018 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.278 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:42.278 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:42.278 true 00:05:42.540 21:54:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:42.540 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.540 21:54:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.800 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:42.800 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:43.060 true 00:05:43.061 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:43.061 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.061 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.321 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:43.321 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:43.581 true 00:05:43.581 21:54:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:43.581 21:54:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.581 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.841 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:43.841 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:44.102 true 00:05:44.102 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:44.102 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.362 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.362 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:44.362 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:44.622 true 00:05:44.622 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:44.622 21:54:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.882 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.882 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:44.882 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:45.142 true 00:05:45.142 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:45.142 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.403 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.663 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:45.663 21:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:45.663 true 00:05:45.663 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:45.663 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.923 
21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.184 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:46.184 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:46.184 true 00:05:46.184 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:46.184 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.445 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.705 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:46.705 21:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:46.705 true 00:05:46.705 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:46.705 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.965 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.226 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:47.226 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:47.226 true 00:05:47.486 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:47.486 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.486 21:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.747 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:47.747 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:48.008 true 00:05:48.008 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:48.008 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.008 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.269 
21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:48.269 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:48.529 true 00:05:48.529 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:48.529 21:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.529 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.790 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:48.790 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:49.050 true 00:05:49.050 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:49.050 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.310 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.310 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:49.310 21:54:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:49.571 true 00:05:49.571 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:49.571 21:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.832 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.832 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:49.832 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:50.092 true 00:05:50.092 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:50.092 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.352 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.612 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:50.612 21:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:50.612 true 00:05:50.612 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:50.612 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.873 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.134 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:51.134 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:51.134 true 00:05:51.134 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:51.134 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.396 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.657 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:51.657 21:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:51.657 true 00:05:51.918 21:54:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:51.918 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.918 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.180 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:52.180 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:52.444 true 00:05:52.444 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:52.444 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.444 21:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.704 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:52.704 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:52.965 true 00:05:52.965 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:52.965 21:54:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.226 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.226 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:53.226 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:53.487 true 00:05:53.487 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:53.487 21:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.748 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.748 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:53.748 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:54.008 true 00:05:54.008 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:54.008 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.269 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.530 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:54.530 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:54.530 true 00:05:54.530 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:54.530 21:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.791 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.051 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:55.051 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:55.051 true 00:05:55.051 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:55.051 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.312 
21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.572 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:55.572 21:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:55.833 true 00:05:55.833 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:55.833 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.833 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.094 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:56.094 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:56.354 true 00:05:56.354 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:56.354 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.615 21:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.615 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:56.615 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:56.875 true 00:05:56.875 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:56.875 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.135 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.135 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:57.135 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:57.395 true 00:05:57.395 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:57.395 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.655 21:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.655 
21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:57.655 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:57.916 true 00:05:57.916 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:57.916 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.176 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.437 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:58.437 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:58.437 true 00:05:58.437 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:58.437 21:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.697 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.956 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:58.956 21:54:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:58.956 true 00:05:58.956 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:58.956 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.216 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.476 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:59.476 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:59.736 true 00:05:59.736 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:05:59.736 21:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.736 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.995 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:59.995 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:00.254 true 00:06:00.254 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:00.255 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.255 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.514 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:00.514 21:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:00.774 true 00:06:00.774 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:00.774 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.035 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.035 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:01.035 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:01.295 true 00:06:01.295 21:54:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:01.295 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.555 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.555 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:01.555 21:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:01.814 true 00:06:01.814 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:01.814 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.074 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.074 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:02.074 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:02.334 true 00:06:02.334 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:02.334 21:54:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.594 21:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.854 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:02.854 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:02.854 true 00:06:02.854 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:02.854 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.115 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.375 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:03.375 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:03.375 true 00:06:03.375 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:03.375 21:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.636 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.896 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:03.896 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:03.896 true 00:06:03.896 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:03.896 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.158 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.418 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:04.418 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:04.418 true 00:06:04.678 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:04.678 21:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.678 
21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.939 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:04.939 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:05.199 true 00:06:05.199 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:05.199 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.199 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.459 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:05.459 21:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:05.719 true 00:06:05.719 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:05.719 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.979 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.979 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:05.979 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:06.240 true 00:06:06.240 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:06.240 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.501 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.501 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:06.501 21:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:06.764 true 00:06:06.764 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:06.764 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.043 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.379 
21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:07.379 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:07.379 true 00:06:07.379 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:07.379 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.641 21:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.641 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:07.641 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:07.901 true 00:06:07.901 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:07.901 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.161 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.422 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:08.422 21:54:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:08.422 true 00:06:08.422 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:08.422 21:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.682 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.942 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:08.942 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:08.942 true 00:06:08.942 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:08.942 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.203 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.463 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:09.463 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:09.463 true 00:06:09.723 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:09.723 21:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.723 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.983 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:09.983 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:10.244 true 00:06:10.244 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:10.244 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.244 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.504 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:10.504 21:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:10.765 true 00:06:10.765 21:54:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:10.765 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.025 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.025 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:06:11.025 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:06:11.287 true 00:06:11.287 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:11.287 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.549 21:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.549 Initializing NVMe Controllers 00:06:11.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:11.549 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:06:11.549 Controller IO queue size 128, less than required. 00:06:11.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:11.549 WARNING: Some requested NVMe devices were skipped 00:06:11.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:11.549 Initialization complete. Launching workers. 00:06:11.549 ======================================================== 00:06:11.549 Latency(us) 00:06:11.549 Device Information : IOPS MiB/s Average min max 00:06:11.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31199.57 15.23 4102.51 1197.66 10522.30 00:06:11.549 ======================================================== 00:06:11.549 Total : 31199.57 15.23 4102.51 1197.66 10522.30 00:06:11.549 00:06:11.549 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:06:11.549 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:06:11.810 true 00:06:11.810 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275594 00:06:11.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3275594) - No such process 00:06:11.810 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3275594 00:06:11.810 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.071 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.071 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:12.332 21:54:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:12.332 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:12.332 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.332 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:12.332 null0 00:06:12.332 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.332 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.332 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:12.593 null1 00:06:12.593 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.593 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.593 21:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:12.593 null2 00:06:12.854 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.854 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.854 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:12.854 null3 
00:06:12.854 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:12.854 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:12.854 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:13.115 null4 00:06:13.115 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:13.115 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:13.115 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:13.115 null5 00:06:13.376 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:13.376 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:13.376 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:13.377 null6 00:06:13.377 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:13.377 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:13.377 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:13.639 null7 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.639 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3282824 3282826 3282827 3282829 3282831 3282833 3282835 3282837 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.640 21:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:13.901 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:14.161 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.161 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.161 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:14.162 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:14.424 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:14.686 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:14.686 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:14.686 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:14.686 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:14.686 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:14.686 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.686 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.686 21:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.686 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:14.948 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:15.209 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.209 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.209 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:15.209 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.209 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.209 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:15.209 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:15.210 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.471 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:15.733 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.733 21:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.733 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.994 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:16.256 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.517 21:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.778 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.778 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.778 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.778 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.778 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.778 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.778 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.778 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.778 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.778 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.779 21:54:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.779 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.779 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.779 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.779 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.779 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.779 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.779 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.779 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.040 21:54:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:17.040 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.301 21:54:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.301 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:17.562 rmmod nvme_tcp 00:06:17.562 rmmod nvme_fabrics 00:06:17.562 rmmod nvme_keyring 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 3275008 ']' 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 3275008 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3275008 ']' 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3275008 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 3275008 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3275008' 00:06:17.562 killing process with pid 3275008 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3275008 00:06:17.562 21:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3275008 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.823 21:54:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.823 21:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.737 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:19.737 00:06:19.737 real 0m48.891s 00:06:19.737 user 3m19.356s 00:06:19.737 sys 0m17.325s 00:06:19.737 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.737 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:19.737 ************************************ 00:06:19.737 END TEST nvmf_ns_hotplug_stress 00:06:19.737 ************************************ 00:06:19.737 21:54:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:19.737 21:54:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:19.737 21:54:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.737 21:54:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.999 ************************************ 00:06:19.999 START TEST nvmf_delete_subsystem 00:06:19.999 ************************************ 00:06:19.999 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:19.999 * Looking for test storage... 
00:06:19.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.999 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.999 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.999 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.999 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:20.000 21:54:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.000 --rc genhtml_branch_coverage=1 00:06:20.000 --rc genhtml_function_coverage=1 00:06:20.000 --rc genhtml_legend=1 00:06:20.000 --rc geninfo_all_blocks=1 00:06:20.000 --rc geninfo_unexecuted_blocks=1 00:06:20.000 00:06:20.000 ' 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.000 --rc genhtml_branch_coverage=1 00:06:20.000 --rc genhtml_function_coverage=1 00:06:20.000 --rc genhtml_legend=1 00:06:20.000 --rc geninfo_all_blocks=1 00:06:20.000 --rc geninfo_unexecuted_blocks=1 00:06:20.000 00:06:20.000 ' 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.000 --rc genhtml_branch_coverage=1 00:06:20.000 --rc genhtml_function_coverage=1 00:06:20.000 --rc genhtml_legend=1 00:06:20.000 --rc geninfo_all_blocks=1 00:06:20.000 --rc geninfo_unexecuted_blocks=1 00:06:20.000 00:06:20.000 ' 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.000 --rc genhtml_branch_coverage=1 00:06:20.000 --rc genhtml_function_coverage=1 00:06:20.000 --rc genhtml_legend=1 00:06:20.000 --rc geninfo_all_blocks=1 00:06:20.000 --rc geninfo_unexecuted_blocks=1 00:06:20.000 00:06:20.000 ' 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.000 21:54:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.000 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.001 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.262 21:54:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:28.408 21:54:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:28.408 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound 
]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:28.408 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ 
tcp == tcp ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.408 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:28.408 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:28.409 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:28.409 21:54:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:28.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:28.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:06:28.409 00:06:28.409 --- 10.0.0.2 ping statistics --- 00:06:28.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.409 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:28.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:06:28.409 00:06:28.409 --- 10.0.0.1 ping statistics --- 00:06:28.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.409 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:28.409 21:54:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=3288014 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 3288014 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3288014 ']' 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.409 21:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.409 [2024-10-12 21:54:46.051640] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:28.409 [2024-10-12 21:54:46.051706] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.409 [2024-10-12 21:54:46.138647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.409 [2024-10-12 21:54:46.184439] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:28.409 [2024-10-12 21:54:46.184491] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:28.409 [2024-10-12 21:54:46.184499] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.409 [2024-10-12 21:54:46.184506] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.409 [2024-10-12 21:54:46.184512] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:28.409 [2024-10-12 21:54:46.184665] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.409 [2024-10-12 21:54:46.184668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.409 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.409 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:28.409 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:28.409 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.409 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.671 [2024-10-12 21:54:46.908537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.671 [2024-10-12 21:54:46.932803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.671 NULL1 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.671 Delay0 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.671 21:54:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3288329 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:28.671 21:54:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:28.671 [2024-10-12 21:54:47.049809] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:30.585 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:30.585 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.585 21:54:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 starting I/O failed: -6 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 starting I/O failed: -6 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 starting I/O failed: -6 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 starting I/O failed: -6 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 starting I/O failed: -6 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 starting I/O failed: -6 00:06:30.846 Write completed with error (sct=0, sc=8) 00:06:30.846 Read completed with error (sct=0, sc=8) 00:06:30.846 Read completed with error 
(sct=0, sc=8)
00:06:30.846 Read completed with error (sct=0, sc=8)
00:06:30.846 starting I/O failed: -6
00:06:30.846 Write completed with error (sct=0, sc=8)
00:06:30.847 [2024-10-12 21:54:49.304379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ea70 is same with the state(6) to be set
00:06:31.789 [2024-10-12 21:54:50.274284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1db20 is same with the state(6) to be set
00:06:32.049 [2024-10-12 21:54:50.307683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ec50 is same with the state(6) to be set
00:06:32.049 [2024-10-12 21:54:50.308000] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf200b0 is same with the state(6) to be set
00:06:32.050 [2024-10-12 21:54:50.311524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f766c000c00 is same with the state(6) to be set
00:06:32.050 [2024-10-12 21:54:50.311906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f766c00d310 is same with the state(6) to be set
00:06:32.050 Initializing NVMe Controllers
00:06:32.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:32.050 Controller IO queue size 128, less than required.
00:06:32.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:32.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:32.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:32.050 Initialization complete. Launching workers.
00:06:32.050 ========================================================
00:06:32.050 Latency(us)
00:06:32.050 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:32.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     169.32       0.08  894321.85     341.18 1006250.76
00:06:32.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     180.28       0.09  960153.82     398.07 2001197.80
00:06:32.050 ========================================================
00:06:32.050 Total                                                                    :     349.60       0.17  928269.39     341.18 2001197.80
00:06:32.050
00:06:32.050 [2024-10-12 21:54:50.312357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1db20 (9): Bad file descriptor
00:06:32.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:32.050 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:32.050 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:32.050 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3288329
00:06:32.050 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3288329
00:06:32.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3288329) - No such process
00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3288329
00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:06:32.622 21:54:50
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3288329 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3288329 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.622 
21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.622 [2024-10-12 21:54:50.841586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3289041 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3289041 00:06:32.622 21:54:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:32.622 [2024-10-12 21:54:50.931716] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:32.883 21:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:32.883 21:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3289041 00:06:32.883 21:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.455 21:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.455 21:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3289041 00:06:33.455 21:54:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.026 21:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.026 21:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3289041 00:06:34.026 21:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.597 21:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.597 21:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3289041 00:06:34.597 21:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.169 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.169 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3289041 00:06:35.169 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.430 21:54:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:35.430 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3289041
00:06:35.430 21:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:35.690 Initializing NVMe Controllers
00:06:35.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:35.690 Controller IO queue size 128, less than required.
00:06:35.690 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:35.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:35.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:35.690 Initialization complete. Launching workers.
00:06:35.690 ========================================================
00:06:35.690 Latency(us)
00:06:35.690 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:35.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1001897.14 1000102.68 1005078.21
00:06:35.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1002922.26 1000169.52 1007581.12
00:06:35.690 ========================================================
00:06:35.690 Total                                                                    :     256.00       0.12 1002409.70 1000102.68 1007581.12
00:06:35.690
00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3289041
00:06:35.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3289041) - No such process
00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 3289041 00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.951 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.951 rmmod nvme_tcp 00:06:35.951 rmmod nvme_fabrics 00:06:35.951 rmmod nvme_keyring 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 3288014 ']' 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 3288014 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3288014 ']' 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3288014 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:36.212 21:54:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3288014 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3288014' 00:06:36.212 killing process with pid 3288014 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3288014 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3288014 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.212 21:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:38.762 00:06:38.762 real 0m18.473s 00:06:38.762 user 0m31.167s 00:06:38.762 sys 0m6.718s 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.762 ************************************ 00:06:38.762 END TEST nvmf_delete_subsystem 00:06:38.762 ************************************ 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.762 ************************************ 00:06:38.762 START TEST nvmf_host_management 00:06:38.762 ************************************ 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:38.762 * Looking for test storage... 
00:06:38.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:38.762 21:54:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:38.762 21:54:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.762 21:54:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.762 --rc genhtml_branch_coverage=1 00:06:38.762 --rc genhtml_function_coverage=1 00:06:38.762 --rc genhtml_legend=1 00:06:38.762 --rc geninfo_all_blocks=1 00:06:38.762 --rc geninfo_unexecuted_blocks=1 00:06:38.762 00:06:38.762 ' 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.762 --rc genhtml_branch_coverage=1 00:06:38.762 --rc genhtml_function_coverage=1 00:06:38.762 --rc genhtml_legend=1 00:06:38.762 --rc geninfo_all_blocks=1 00:06:38.762 --rc geninfo_unexecuted_blocks=1 00:06:38.762 00:06:38.762 ' 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.762 --rc genhtml_branch_coverage=1 00:06:38.762 --rc genhtml_function_coverage=1 00:06:38.762 --rc genhtml_legend=1 00:06:38.762 --rc geninfo_all_blocks=1 00:06:38.762 --rc geninfo_unexecuted_blocks=1 00:06:38.762 00:06:38.762 ' 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.762 --rc genhtml_branch_coverage=1 00:06:38.762 --rc genhtml_function_coverage=1 00:06:38.762 --rc genhtml_legend=1 00:06:38.762 --rc geninfo_all_blocks=1 00:06:38.762 --rc geninfo_unexecuted_blocks=1 00:06:38.762 00:06:38.762 ' 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.762 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:38.763 21:54:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.909 21:55:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.909 21:55:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:46.909 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:46.909 21:55:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:46.909 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:46.909 21:55:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:46.909 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.909 21:55:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:46.909 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:46.909 
21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:46.909 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:06:46.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:06:46.909 00:06:46.909 --- 10.0.0.2 ping statistics --- 00:06:46.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.910 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:06:46.910 00:06:46.910 --- 10.0.0.1 ping statistics --- 00:06:46.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.910 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # 
nvmf_host_management 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=3294084 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 3294084 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3294084 ']' 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.910 21:55:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:46.910 [2024-10-12 21:55:04.652047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:46.910 [2024-10-12 21:55:04.652124] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.910 [2024-10-12 21:55:04.740995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.910 [2024-10-12 21:55:04.789864] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.910 [2024-10-12 21:55:04.789926] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.910 [2024-10-12 21:55:04.789935] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.910 [2024-10-12 21:55:04.789943] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.910 [2024-10-12 21:55:04.789950] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:46.910 [2024-10-12 21:55:04.790184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.910 [2024-10-12 21:55:04.790353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.910 [2024-10-12 21:55:04.790511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.910 [2024-10-12 21:55:04.790511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.171 [2024-10-12 21:55:05.535196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:47.171 21:55:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:47.171 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.172 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:47.172 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:47.172 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:47.172 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.172 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.172 Malloc0 00:06:47.172 [2024-10-12 21:55:05.604616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.172 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.172 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:47.172 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:47.172 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3294311 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3294311 /var/tmp/bdevperf.sock 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3294311 ']' 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:47.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:06:47.433 { 00:06:47.433 "params": { 00:06:47.433 "name": "Nvme$subsystem", 00:06:47.433 "trtype": "$TEST_TRANSPORT", 00:06:47.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:47.433 "adrfam": "ipv4", 00:06:47.433 "trsvcid": "$NVMF_PORT", 00:06:47.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:47.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:47.433 "hdgst": ${hdgst:-false}, 
00:06:47.433 "ddgst": ${ddgst:-false} 00:06:47.433 }, 00:06:47.433 "method": "bdev_nvme_attach_controller" 00:06:47.433 } 00:06:47.433 EOF 00:06:47.433 )") 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:06:47.433 21:55:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:06:47.433 "params": { 00:06:47.433 "name": "Nvme0", 00:06:47.433 "trtype": "tcp", 00:06:47.433 "traddr": "10.0.0.2", 00:06:47.433 "adrfam": "ipv4", 00:06:47.433 "trsvcid": "4420", 00:06:47.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:47.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:47.433 "hdgst": false, 00:06:47.433 "ddgst": false 00:06:47.433 }, 00:06:47.433 "method": "bdev_nvme_attach_controller" 00:06:47.433 }' 00:06:47.433 [2024-10-12 21:55:05.714074] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:47.433 [2024-10-12 21:55:05.714150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294311 ] 00:06:47.434 [2024-10-12 21:55:05.798285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.434 [2024-10-12 21:55:05.846036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.695 Running I/O for 10 seconds... 
00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=776 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 776 -ge 100 ']' 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.269 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.269 [2024-10-12 21:55:06.628675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb56b0 is same with the state(6) to be set 00:06:48.269 [2024-10-12 21:55:06.628998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629291] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629389] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.269 [2024-10-12 21:55:06.629487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.269 [2024-10-12 21:55:06.629496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 
[2024-10-12 21:55:06.629605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.629982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.629991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 
21:55:06.630007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:48.270 [2024-10-12 21:55:06.630235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.270 [2024-10-12 21:55:06.630244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e1f20 is same with the state(6) to be set 00:06:48.271 [2024-10-12 21:55:06.630313] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8e1f20 was disconnected and freed. reset controller. 00:06:48.271 [2024-10-12 21:55:06.631581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:48.271 task offset: 114304 on job bdev=Nvme0n1 fails 00:06:48.271 00:06:48.271 Latency(us) 00:06:48.271 [2024-10-12T19:55:06.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:48.271 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:48.271 Job: Nvme0n1 ended in about 0.59 seconds with error 00:06:48.271 Verification LBA range: start 0x0 length 0x400 00:06:48.271 Nvme0n1 : 0.59 1425.63 89.10 109.01 0.00 40725.81 2007.04 38010.88 00:06:48.271 [2024-10-12T19:55:06.760Z] =================================================================================================================== 00:06:48.271 [2024-10-12T19:55:06.760Z] Total : 1425.63 89.10 109.01 0.00 40725.81 2007.04 38010.88 00:06:48.271 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.271 [2024-10-12 21:55:06.633861] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.271 [2024-10-12 21:55:06.633904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x6c8ed0 (9): Bad file descriptor 00:06:48.271 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:48.271 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.271 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:48.271 [2024-10-12 21:55:06.637951] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:48.271 [2024-10-12 21:55:06.638063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:48.271 [2024-10-12 21:55:06.638092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:48.271 [2024-10-12 21:55:06.638113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:48.271 [2024-10-12 21:55:06.638124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:48.271 [2024-10-12 21:55:06.638132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:48.271 [2024-10-12 21:55:06.638140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6c8ed0 00:06:48.271 [2024-10-12 21:55:06.638163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8ed0 (9): Bad file descriptor 00:06:48.271 [2024-10-12 21:55:06.638177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:06:48.271 [2024-10-12 21:55:06.638186] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:06:48.271 [2024-10-12 21:55:06.638197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:06:48.271 [2024-10-12 21:55:06.638212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:48.271 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.271 21:55:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3294311 00:06:49.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3294311) - No such process 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:06:49.215 { 00:06:49.215 "params": { 00:06:49.215 "name": "Nvme$subsystem", 00:06:49.215 "trtype": "$TEST_TRANSPORT", 00:06:49.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:49.215 "adrfam": "ipv4", 00:06:49.215 "trsvcid": "$NVMF_PORT", 00:06:49.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:49.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:49.215 "hdgst": ${hdgst:-false}, 00:06:49.215 "ddgst": ${ddgst:-false} 00:06:49.215 }, 00:06:49.215 "method": "bdev_nvme_attach_controller" 00:06:49.215 } 00:06:49.215 EOF 00:06:49.215 )") 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:06:49.215 21:55:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:06:49.215 "params": { 00:06:49.215 "name": "Nvme0", 00:06:49.215 "trtype": "tcp", 00:06:49.215 "traddr": "10.0.0.2", 00:06:49.215 "adrfam": "ipv4", 00:06:49.215 "trsvcid": "4420", 00:06:49.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:49.215 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:49.215 "hdgst": false, 00:06:49.215 "ddgst": false 00:06:49.215 }, 00:06:49.215 "method": "bdev_nvme_attach_controller" 00:06:49.215 }' 00:06:49.476 [2024-10-12 21:55:07.706310] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:49.476 [2024-10-12 21:55:07.706365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294814 ] 00:06:49.476 [2024-10-12 21:55:07.782950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.476 [2024-10-12 21:55:07.812249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.736 Running I/O for 1 seconds... 00:06:50.678 1662.00 IOPS, 103.88 MiB/s 00:06:50.678 Latency(us) 00:06:50.678 [2024-10-12T19:55:09.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.678 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:50.678 Verification LBA range: start 0x0 length 0x400 00:06:50.678 Nvme0n1 : 1.02 1690.81 105.68 0.00 0.00 37131.84 5543.25 33423.36 00:06:50.678 [2024-10-12T19:55:09.167Z] =================================================================================================================== 00:06:50.678 [2024-10-12T19:55:09.167Z] Total : 1690.81 105.68 0.00 0.00 37131.84 5543.25 33423.36 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:50.939 21:55:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:50.939 rmmod nvme_tcp 00:06:50.939 rmmod nvme_fabrics 00:06:50.939 rmmod nvme_keyring 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 3294084 ']' 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 3294084 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3294084 ']' 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3294084 00:06:50.939 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:50.940 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.940 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3294084 00:06:50.940 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:50.940 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:50.940 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3294084' 00:06:50.940 killing process with pid 3294084 00:06:50.940 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3294084 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3294084 00:06:51.200 [2024-10-12 21:55:09.528183] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:51.200 21:55:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:53.750 00:06:53.750 real 0m14.827s 00:06:53.750 user 0m23.773s 00:06:53.750 sys 0m6.854s 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.750 ************************************ 00:06:53.750 END TEST nvmf_host_management 00:06:53.750 ************************************ 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:53.750 ************************************ 00:06:53.750 START TEST nvmf_lvol 00:06:53.750 ************************************ 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:53.750 * Looking for test storage... 
00:06:53.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.750 21:55:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:53.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.750 --rc genhtml_branch_coverage=1 00:06:53.750 --rc genhtml_function_coverage=1 00:06:53.750 --rc genhtml_legend=1 00:06:53.750 --rc geninfo_all_blocks=1 00:06:53.750 --rc geninfo_unexecuted_blocks=1 
00:06:53.750 00:06:53.750 ' 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:53.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.750 --rc genhtml_branch_coverage=1 00:06:53.750 --rc genhtml_function_coverage=1 00:06:53.750 --rc genhtml_legend=1 00:06:53.750 --rc geninfo_all_blocks=1 00:06:53.750 --rc geninfo_unexecuted_blocks=1 00:06:53.750 00:06:53.750 ' 00:06:53.750 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:53.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.750 --rc genhtml_branch_coverage=1 00:06:53.750 --rc genhtml_function_coverage=1 00:06:53.750 --rc genhtml_legend=1 00:06:53.751 --rc geninfo_all_blocks=1 00:06:53.751 --rc geninfo_unexecuted_blocks=1 00:06:53.751 00:06:53.751 ' 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:53.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.751 --rc genhtml_branch_coverage=1 00:06:53.751 --rc genhtml_function_coverage=1 00:06:53.751 --rc genhtml_legend=1 00:06:53.751 --rc geninfo_all_blocks=1 00:06:53.751 --rc geninfo_unexecuted_blocks=1 00:06:53.751 00:06:53.751 ' 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.751 21:55:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:53.751 21:55:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:01.900 21:55:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:01.900 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:01.900 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:01.900 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:01.901 21:55:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:01.901 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:01.901 21:55:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:01.901 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:01.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:01.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:07:01.901 00:07:01.901 --- 10.0.0.2 ping statistics --- 00:07:01.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.901 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:01.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:07:01.901 00:07:01.901 --- 10.0.0.1 ping statistics --- 00:07:01.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.901 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
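For reference, the nvmf_tcp_init trace above (common.sh@250-291) boils down to the following namespace topology, shown here as a hedged sketch: it assumes the same ice port names (cvl_0_0/cvl_0_1) discovered in this run and must run as root, so it is illustrative only, not a replacement for the test scripts.

```shell
# Sketch of the TCP test topology built by nvmf_tcp_init above:
# the target port is isolated in its own network namespace and each
# side pings the other across the back-to-back link (requires root).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
```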
common/autotest_common.sh@724 -- # xtrace_disable 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=3299358 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 3299358 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3299358 ']' 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.901 21:55:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.901 [2024-10-12 21:55:19.539087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:01.901 [2024-10-12 21:55:19.539162] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.901 [2024-10-12 21:55:19.627893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.901 [2024-10-12 21:55:19.675484] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.901 [2024-10-12 21:55:19.675538] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.901 [2024-10-12 21:55:19.675547] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.901 [2024-10-12 21:55:19.675554] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.901 [2024-10-12 21:55:19.675560] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:01.901 [2024-10-12 21:55:19.675715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.901 [2024-10-12 21:55:19.675871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.901 [2024-10-12 21:55:19.675872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.901 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.901 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:01.901 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:01.901 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:01.901 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:02.163 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.163 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:02.163 [2024-10-12 21:55:20.571857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.163 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:02.424 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:02.424 21:55:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:02.685 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:02.685 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:02.946 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:03.207 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e13accdb-6384-452c-abcf-7ae9b70ab8d1 00:07:03.207 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e13accdb-6384-452c-abcf-7ae9b70ab8d1 lvol 20 00:07:03.207 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9b752afd-89c9-4ca1-9ff1-a73fd3a8efc2 00:07:03.207 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:03.469 21:55:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b752afd-89c9-4ca1-9ff1-a73fd3a8efc2 00:07:03.729 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:03.999 [2024-10-12 21:55:22.242100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.999 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:03.999 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3299885 00:07:03.999 21:55:22 
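Condensing the nvmf_lvol.sh@24-38 RPC trace above: two malloc bdevs are striped into a raid0, an lvstore is built on top of it, and a 20 MiB logical volume is exported over NVMe/TCP. A hedged sketch against a running nvmf_tgt (rpc.py path shortened; the `$lvs`/`$lvol` variables stand in for the UUIDs the create calls print):

```shell
rpc.py bdev_malloc_create 64 512                       # -> Malloc0
rpc.py bdev_malloc_create 64 512                       # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)       # lvstore UUID
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol UUID
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```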
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:03.999 21:55:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:05.066 21:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9b752afd-89c9-4ca1-9ff1-a73fd3a8efc2 MY_SNAPSHOT 00:07:05.327 21:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c4bb2b7b-c2aa-4ce9-af74-faf001db0126 00:07:05.327 21:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9b752afd-89c9-4ca1-9ff1-a73fd3a8efc2 30 00:07:05.588 21:55:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c4bb2b7b-c2aa-4ce9-af74-faf001db0126 MY_CLONE 00:07:05.849 21:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fef43199-26ba-4750-b8b3-716cc102be34 00:07:05.849 21:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fef43199-26ba-4750-b8b3-716cc102be34 00:07:06.110 21:55:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3299885 00:07:16.115 Initializing NVMe Controllers 00:07:16.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:16.116 Controller IO queue size 128, less than required. 00:07:16.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
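While spdk_nvme_perf drives the randwrite workload, nvmf_lvol.sh@47-50 above exercises the snapshot path on the live volume. The same sequence as a hedged sketch (UUID variables symbolic, as before; sizes in MiB as in the log):

```shell
snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # snapshot taken under load
rpc.py bdev_lvol_resize "$lvol" 30                     # grow the live lvol 20 -> 30
clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)       # thin clone of the snapshot
rpc.py bdev_lvol_inflate "$clone"                      # decouple clone from its snapshot
```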
00:07:16.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:16.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:16.116 Initialization complete. Launching workers. 00:07:16.116 ======================================================== 00:07:16.116 Latency(us) 00:07:16.116 Device Information : IOPS MiB/s Average min max 00:07:16.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16736.10 65.38 7649.61 1520.75 45863.84 00:07:16.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17071.00 66.68 7498.77 780.09 61684.67 00:07:16.116 ======================================================== 00:07:16.116 Total : 33807.10 132.06 7573.45 780.09 61684.67 00:07:16.116 00:07:16.116 21:55:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9b752afd-89c9-4ca1-9ff1-a73fd3a8efc2 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e13accdb-6384-452c-abcf-7ae9b70ab8d1 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
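As a quick consistency check on the spdk_nvme_perf summary above, the two per-core IOPS figures should sum to the reported total; a minimal awk pass over the values from the table:

```shell
# 16736.10 (lcore 3) + 17071.00 (lcore 4) should match the 33807.10 total.
awk '{ sum += $1 } END { printf "%.2f\n", sum }' <<'EOF'
16736.10
17071.00
EOF
# prints 33807.10
```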
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.116 rmmod nvme_tcp 00:07:16.116 rmmod nvme_fabrics 00:07:16.116 rmmod nvme_keyring 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 3299358 ']' 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 3299358 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3299358 ']' 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3299358 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3299358 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3299358' 00:07:16.116 killing process with pid 3299358 00:07:16.116 21:55:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3299358 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3299358 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.116 21:55:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.502 21:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:17.502 00:07:17.502 real 0m24.115s 00:07:17.502 user 1m5.380s 00:07:17.502 sys 0m8.675s 00:07:17.502 21:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.502 21:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:17.502 ************************************ 00:07:17.502 END TEST 
nvmf_lvol 00:07:17.502 ************************************ 00:07:17.502 21:55:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:17.502 21:55:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:17.502 21:55:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.502 21:55:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:17.502 ************************************ 00:07:17.502 START TEST nvmf_lvs_grow 00:07:17.502 ************************************ 00:07:17.502 21:55:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:17.764 * Looking for test storage... 00:07:17.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.764 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:17.764 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.765 21:55:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:17.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.765 --rc genhtml_branch_coverage=1 00:07:17.765 --rc genhtml_function_coverage=1 00:07:17.765 --rc genhtml_legend=1 00:07:17.765 --rc geninfo_all_blocks=1 00:07:17.765 --rc geninfo_unexecuted_blocks=1 00:07:17.765 00:07:17.765 ' 
00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:17.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.765 --rc genhtml_branch_coverage=1 00:07:17.765 --rc genhtml_function_coverage=1 00:07:17.765 --rc genhtml_legend=1 00:07:17.765 --rc geninfo_all_blocks=1 00:07:17.765 --rc geninfo_unexecuted_blocks=1 00:07:17.765 00:07:17.765 ' 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:17.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.765 --rc genhtml_branch_coverage=1 00:07:17.765 --rc genhtml_function_coverage=1 00:07:17.765 --rc genhtml_legend=1 00:07:17.765 --rc geninfo_all_blocks=1 00:07:17.765 --rc geninfo_unexecuted_blocks=1 00:07:17.765 00:07:17.765 ' 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:17.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.765 --rc genhtml_branch_coverage=1 00:07:17.765 --rc genhtml_function_coverage=1 00:07:17.765 --rc genhtml_legend=1 00:07:17.765 --rc geninfo_all_blocks=1 00:07:17.765 --rc geninfo_unexecuted_blocks=1 00:07:17.765 00:07:17.765 ' 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.765 21:55:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.765 
21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.765 21:55:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:17.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:17.765 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:17.766 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.766 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:17.766 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:17.766 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:17.766 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.766 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.766 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.766 
21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:17.766 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:17.766 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:17.766 21:55:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.906 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # 
pci_devs=("${e810[@]}") 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:25.907 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:25.907 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:25.907 21:55:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:25.907 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:25.907 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.907 21:55:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 
10.0.0.2 00:07:25.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:07:25.907 00:07:25.907 --- 10.0.0.2 ping statistics --- 00:07:25.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.907 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:07:25.907 00:07:25.907 --- 10.0.0.1 ping statistics --- 00:07:25.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.907 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=3306549 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 3306549 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3306549 ']' 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.907 21:55:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.907 [2024-10-12 21:55:43.649079] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:25.907 [2024-10-12 21:55:43.649143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.907 [2024-10-12 21:55:43.732947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.907 [2024-10-12 21:55:43.763970] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.907 [2024-10-12 21:55:43.764009] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.907 [2024-10-12 21:55:43.764017] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.907 [2024-10-12 21:55:43.764023] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.907 [2024-10-12 21:55:43.764029] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:25.907 [2024-10-12 21:55:43.764046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.168 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.168 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:26.168 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:26.168 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.168 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.168 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.168 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:26.168 [2024-10-12 21:55:44.638489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:26.430 ************************************ 00:07:26.430 START TEST lvs_grow_clean 00:07:26.430 ************************************ 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.430 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.691 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:26.691 21:55:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:26.691 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:26.691 21:55:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:26.691 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:26.954 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:26.954 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:26.954 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 lvol 150 00:07:27.215 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4d7d7fbb-7537-4691-9990-c15258321539 00:07:27.215 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.215 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:27.215 [2024-10-12 21:55:45.640576] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:27.215 [2024-10-12 21:55:45.640643] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:27.215 true 00:07:27.215 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:27.215 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:27.476 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:27.476 21:55:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:27.736 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4d7d7fbb-7537-4691-9990-c15258321539 00:07:27.997 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:27.997 [2024-10-12 21:55:46.398952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.997 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.259 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3307062 00:07:28.259 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.259 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:28.259 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3307062 /var/tmp/bdevperf.sock 00:07:28.259 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3307062 ']' 00:07:28.259 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:28.259 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.259 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:28.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:28.259 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.259 21:55:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:28.259 [2024-10-12 21:55:46.655408] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:28.259 [2024-10-12 21:55:46.655477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3307062 ] 00:07:28.259 [2024-10-12 21:55:46.737012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.521 [2024-10-12 21:55:46.784721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.093 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.093 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:29.093 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:29.666 Nvme0n1 00:07:29.666 21:55:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:29.666 [ 00:07:29.666 { 00:07:29.666 "name": "Nvme0n1", 00:07:29.666 "aliases": [ 00:07:29.666 "4d7d7fbb-7537-4691-9990-c15258321539" 00:07:29.666 ], 00:07:29.666 "product_name": "NVMe disk", 00:07:29.666 "block_size": 4096, 00:07:29.666 "num_blocks": 38912, 00:07:29.666 "uuid": "4d7d7fbb-7537-4691-9990-c15258321539", 00:07:29.666 "numa_id": 0, 00:07:29.666 "assigned_rate_limits": { 00:07:29.666 "rw_ios_per_sec": 0, 00:07:29.666 "rw_mbytes_per_sec": 0, 00:07:29.666 "r_mbytes_per_sec": 0, 00:07:29.666 "w_mbytes_per_sec": 0 00:07:29.666 }, 00:07:29.666 "claimed": false, 00:07:29.666 "zoned": false, 00:07:29.666 "supported_io_types": { 00:07:29.666 "read": true, 
00:07:29.666 "write": true, 00:07:29.666 "unmap": true, 00:07:29.666 "flush": true, 00:07:29.666 "reset": true, 00:07:29.666 "nvme_admin": true, 00:07:29.666 "nvme_io": true, 00:07:29.666 "nvme_io_md": false, 00:07:29.666 "write_zeroes": true, 00:07:29.666 "zcopy": false, 00:07:29.666 "get_zone_info": false, 00:07:29.666 "zone_management": false, 00:07:29.666 "zone_append": false, 00:07:29.666 "compare": true, 00:07:29.666 "compare_and_write": true, 00:07:29.666 "abort": true, 00:07:29.666 "seek_hole": false, 00:07:29.666 "seek_data": false, 00:07:29.666 "copy": true, 00:07:29.666 "nvme_iov_md": false 00:07:29.666 }, 00:07:29.666 "memory_domains": [ 00:07:29.666 { 00:07:29.666 "dma_device_id": "system", 00:07:29.666 "dma_device_type": 1 00:07:29.666 } 00:07:29.666 ], 00:07:29.666 "driver_specific": { 00:07:29.666 "nvme": [ 00:07:29.666 { 00:07:29.666 "trid": { 00:07:29.666 "trtype": "TCP", 00:07:29.666 "adrfam": "IPv4", 00:07:29.666 "traddr": "10.0.0.2", 00:07:29.666 "trsvcid": "4420", 00:07:29.666 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:29.666 }, 00:07:29.666 "ctrlr_data": { 00:07:29.666 "cntlid": 1, 00:07:29.666 "vendor_id": "0x8086", 00:07:29.666 "model_number": "SPDK bdev Controller", 00:07:29.666 "serial_number": "SPDK0", 00:07:29.666 "firmware_revision": "24.09.1", 00:07:29.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.666 "oacs": { 00:07:29.666 "security": 0, 00:07:29.666 "format": 0, 00:07:29.666 "firmware": 0, 00:07:29.666 "ns_manage": 0 00:07:29.666 }, 00:07:29.666 "multi_ctrlr": true, 00:07:29.666 "ana_reporting": false 00:07:29.666 }, 00:07:29.666 "vs": { 00:07:29.666 "nvme_version": "1.3" 00:07:29.666 }, 00:07:29.666 "ns_data": { 00:07:29.666 "id": 1, 00:07:29.666 "can_share": true 00:07:29.666 } 00:07:29.666 } 00:07:29.666 ], 00:07:29.666 "mp_policy": "active_passive" 00:07:29.666 } 00:07:29.666 } 00:07:29.666 ] 00:07:29.666 21:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3307316 00:07:29.666 21:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:29.666 21:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:29.927 Running I/O for 10 seconds... 00:07:30.870 Latency(us) 00:07:30.870 [2024-10-12T19:55:49.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.870 Nvme0n1 : 1.00 25087.00 98.00 0.00 0.00 0.00 0.00 0.00 00:07:30.870 [2024-10-12T19:55:49.359Z] =================================================================================================================== 00:07:30.870 [2024-10-12T19:55:49.359Z] Total : 25087.00 98.00 0.00 0.00 0.00 0.00 0.00 00:07:30.870 00:07:31.812 21:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:31.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.812 Nvme0n1 : 2.00 25230.00 98.55 0.00 0.00 0.00 0.00 0.00 00:07:31.812 [2024-10-12T19:55:50.301Z] =================================================================================================================== 00:07:31.812 [2024-10-12T19:55:50.301Z] Total : 25230.00 98.55 0.00 0.00 0.00 0.00 0.00 00:07:31.812 00:07:31.812 true 00:07:31.812 21:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:31.812 21:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:32.073 21:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:32.073 21:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:32.073 21:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3307316 00:07:33.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.015 Nvme0n1 : 3.00 25304.33 98.85 0.00 0.00 0.00 0.00 0.00 00:07:33.015 [2024-10-12T19:55:51.504Z] =================================================================================================================== 00:07:33.015 [2024-10-12T19:55:51.504Z] Total : 25304.33 98.85 0.00 0.00 0.00 0.00 0.00 00:07:33.015 00:07:33.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.958 Nvme0n1 : 4.00 25346.75 99.01 0.00 0.00 0.00 0.00 0.00 00:07:33.958 [2024-10-12T19:55:52.447Z] =================================================================================================================== 00:07:33.958 [2024-10-12T19:55:52.447Z] Total : 25346.75 99.01 0.00 0.00 0.00 0.00 0.00 00:07:33.958 00:07:34.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.901 Nvme0n1 : 5.00 25380.60 99.14 0.00 0.00 0.00 0.00 0.00 00:07:34.901 [2024-10-12T19:55:53.390Z] =================================================================================================================== 00:07:34.901 [2024-10-12T19:55:53.390Z] Total : 25380.60 99.14 0.00 0.00 0.00 0.00 0.00 00:07:34.901 00:07:35.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.844 Nvme0n1 : 6.00 25406.00 99.24 0.00 0.00 0.00 0.00 0.00 00:07:35.844 [2024-10-12T19:55:54.333Z] =================================================================================================================== 00:07:35.844 
[2024-10-12T19:55:54.333Z] Total : 25406.00 99.24 0.00 0.00 0.00 0.00 0.00 00:07:35.844 00:07:36.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.787 Nvme0n1 : 7.00 25424.43 99.31 0.00 0.00 0.00 0.00 0.00 00:07:36.787 [2024-10-12T19:55:55.276Z] =================================================================================================================== 00:07:36.787 [2024-10-12T19:55:55.276Z] Total : 25424.43 99.31 0.00 0.00 0.00 0.00 0.00 00:07:36.787 00:07:37.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.730 Nvme0n1 : 8.00 25442.75 99.39 0.00 0.00 0.00 0.00 0.00 00:07:37.730 [2024-10-12T19:55:56.219Z] =================================================================================================================== 00:07:37.730 [2024-10-12T19:55:56.219Z] Total : 25442.75 99.39 0.00 0.00 0.00 0.00 0.00 00:07:37.730 00:07:39.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.116 Nvme0n1 : 9.00 25456.22 99.44 0.00 0.00 0.00 0.00 0.00 00:07:39.116 [2024-10-12T19:55:57.605Z] =================================================================================================================== 00:07:39.116 [2024-10-12T19:55:57.605Z] Total : 25456.22 99.44 0.00 0.00 0.00 0.00 0.00 00:07:39.116 00:07:40.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.057 Nvme0n1 : 10.00 25470.10 99.49 0.00 0.00 0.00 0.00 0.00 00:07:40.057 [2024-10-12T19:55:58.546Z] =================================================================================================================== 00:07:40.057 [2024-10-12T19:55:58.546Z] Total : 25470.10 99.49 0.00 0.00 0.00 0.00 0.00 00:07:40.057 00:07:40.057 00:07:40.057 Latency(us) 00:07:40.057 [2024-10-12T19:55:58.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:40.057 Nvme0n1 : 10.00 25468.82 99.49 0.00 0.00 5022.28 2498.56 11359.57 00:07:40.057 [2024-10-12T19:55:58.546Z] =================================================================================================================== 00:07:40.057 [2024-10-12T19:55:58.546Z] Total : 25468.82 99.49 0.00 0.00 5022.28 2498.56 11359.57 00:07:40.057 { 00:07:40.057 "results": [ 00:07:40.057 { 00:07:40.057 "job": "Nvme0n1", 00:07:40.057 "core_mask": "0x2", 00:07:40.057 "workload": "randwrite", 00:07:40.057 "status": "finished", 00:07:40.057 "queue_depth": 128, 00:07:40.057 "io_size": 4096, 00:07:40.057 "runtime": 10.003055, 00:07:40.057 "iops": 25468.81927571127, 00:07:40.057 "mibps": 99.48757529574715, 00:07:40.057 "io_failed": 0, 00:07:40.057 "io_timeout": 0, 00:07:40.057 "avg_latency_us": 5022.279623916326, 00:07:40.057 "min_latency_us": 2498.56, 00:07:40.057 "max_latency_us": 11359.573333333334 00:07:40.057 } 00:07:40.057 ], 00:07:40.057 "core_count": 1 00:07:40.057 } 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3307062 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3307062 ']' 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3307062 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3307062 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3307062' 00:07:40.057 killing process with pid 3307062 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3307062 00:07:40.057 Received shutdown signal, test time was about 10.000000 seconds 00:07:40.057 00:07:40.057 Latency(us) 00:07:40.057 [2024-10-12T19:55:58.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.057 [2024-10-12T19:55:58.546Z] =================================================================================================================== 00:07:40.057 [2024-10-12T19:55:58.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3307062 00:07:40.057 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.318 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:40.318 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:40.318 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:40.579 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:40.579 21:55:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:40.579 21:55:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.840 [2024-10-12 21:55:59.107598] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.840 21:55:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:40.840 request: 00:07:40.840 { 00:07:40.840 "uuid": "8b8e0ca8-8d61-4431-8821-612b2f3256d5", 00:07:40.840 "method": "bdev_lvol_get_lvstores", 00:07:40.840 "req_id": 1 00:07:40.840 } 00:07:40.840 Got JSON-RPC error response 00:07:40.840 response: 00:07:40.840 { 00:07:40.840 "code": -19, 00:07:40.840 "message": "No such device" 00:07:40.840 } 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.840 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:41.102 aio_bdev 00:07:41.102 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4d7d7fbb-7537-4691-9990-c15258321539 00:07:41.102 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=4d7d7fbb-7537-4691-9990-c15258321539 00:07:41.102 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.102 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:41.102 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.102 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.102 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:41.363 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4d7d7fbb-7537-4691-9990-c15258321539 -t 2000 00:07:41.363 [ 00:07:41.363 { 00:07:41.363 "name": "4d7d7fbb-7537-4691-9990-c15258321539", 00:07:41.363 "aliases": [ 00:07:41.363 "lvs/lvol" 00:07:41.363 ], 00:07:41.363 "product_name": "Logical Volume", 00:07:41.363 "block_size": 4096, 00:07:41.363 "num_blocks": 38912, 00:07:41.363 "uuid": "4d7d7fbb-7537-4691-9990-c15258321539", 00:07:41.363 "assigned_rate_limits": { 00:07:41.363 "rw_ios_per_sec": 0, 00:07:41.363 "rw_mbytes_per_sec": 0, 00:07:41.363 "r_mbytes_per_sec": 0, 00:07:41.363 "w_mbytes_per_sec": 0 00:07:41.363 }, 00:07:41.363 "claimed": false, 00:07:41.363 "zoned": false, 00:07:41.363 "supported_io_types": { 00:07:41.363 "read": true, 00:07:41.363 "write": true, 00:07:41.363 "unmap": true, 00:07:41.363 "flush": false, 00:07:41.363 "reset": true, 00:07:41.363 
"nvme_admin": false, 00:07:41.363 "nvme_io": false, 00:07:41.363 "nvme_io_md": false, 00:07:41.363 "write_zeroes": true, 00:07:41.363 "zcopy": false, 00:07:41.363 "get_zone_info": false, 00:07:41.363 "zone_management": false, 00:07:41.363 "zone_append": false, 00:07:41.363 "compare": false, 00:07:41.363 "compare_and_write": false, 00:07:41.363 "abort": false, 00:07:41.363 "seek_hole": true, 00:07:41.363 "seek_data": true, 00:07:41.363 "copy": false, 00:07:41.363 "nvme_iov_md": false 00:07:41.363 }, 00:07:41.363 "driver_specific": { 00:07:41.363 "lvol": { 00:07:41.363 "lvol_store_uuid": "8b8e0ca8-8d61-4431-8821-612b2f3256d5", 00:07:41.363 "base_bdev": "aio_bdev", 00:07:41.363 "thin_provision": false, 00:07:41.363 "num_allocated_clusters": 38, 00:07:41.363 "snapshot": false, 00:07:41.363 "clone": false, 00:07:41.363 "esnap_clone": false 00:07:41.363 } 00:07:41.363 } 00:07:41.363 } 00:07:41.363 ] 00:07:41.363 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:41.363 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:41.363 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:41.623 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:41.623 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:41.623 21:55:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:41.884 21:56:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:41.885 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4d7d7fbb-7537-4691-9990-c15258321539 00:07:41.885 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b8e0ca8-8d61-4431-8821-612b2f3256d5 00:07:42.146 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.407 00:07:42.407 real 0m15.980s 00:07:42.407 user 0m15.772s 00:07:42.407 sys 0m1.361s 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:42.407 ************************************ 00:07:42.407 END TEST lvs_grow_clean 00:07:42.407 ************************************ 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.407 ************************************ 
00:07:42.407 START TEST lvs_grow_dirty 00:07:42.407 ************************************ 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.407 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:42.668 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:42.668 21:56:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:42.668 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:42.668 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:42.668 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:42.928 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:42.928 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:42.928 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 97284c75-d3fe-409f-8d7a-074d05e1c848 lvol 150 00:07:43.189 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=96ecb626-48cd-495f-9367-c68f974ca840 00:07:43.189 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:43.189 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:43.189 [2024-10-12 21:56:01.645645] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:43.189 [2024-10-12 21:56:01.645686] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:43.189 true 00:07:43.189 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:43.189 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:43.449 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:43.449 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:43.710 21:56:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96ecb626-48cd-495f-9367-c68f974ca840 00:07:43.710 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:43.970 [2024-10-12 21:56:02.295518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.970 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.231 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3310388 00:07:44.231 21:56:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.231 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:44.231 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3310388 /var/tmp/bdevperf.sock 00:07:44.231 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3310388 ']' 00:07:44.231 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.231 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.231 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.231 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.231 21:56:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:44.231 [2024-10-12 21:56:02.546265] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:44.231 [2024-10-12 21:56:02.546319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3310388 ] 00:07:44.231 [2024-10-12 21:56:02.624203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.231 [2024-10-12 21:56:02.652738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.173 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.173 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:45.173 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:45.173 Nvme0n1 00:07:45.173 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:45.435 [ 00:07:45.435 { 00:07:45.435 "name": "Nvme0n1", 00:07:45.435 "aliases": [ 00:07:45.435 "96ecb626-48cd-495f-9367-c68f974ca840" 00:07:45.435 ], 00:07:45.435 "product_name": "NVMe disk", 00:07:45.435 "block_size": 4096, 00:07:45.435 "num_blocks": 38912, 00:07:45.435 "uuid": "96ecb626-48cd-495f-9367-c68f974ca840", 00:07:45.435 "numa_id": 0, 00:07:45.435 "assigned_rate_limits": { 00:07:45.435 "rw_ios_per_sec": 0, 00:07:45.435 "rw_mbytes_per_sec": 0, 00:07:45.435 "r_mbytes_per_sec": 0, 00:07:45.435 "w_mbytes_per_sec": 0 00:07:45.435 }, 00:07:45.435 "claimed": false, 00:07:45.435 "zoned": false, 00:07:45.435 "supported_io_types": { 00:07:45.435 "read": true, 
00:07:45.435 "write": true, 00:07:45.435 "unmap": true, 00:07:45.435 "flush": true, 00:07:45.435 "reset": true, 00:07:45.435 "nvme_admin": true, 00:07:45.435 "nvme_io": true, 00:07:45.435 "nvme_io_md": false, 00:07:45.435 "write_zeroes": true, 00:07:45.435 "zcopy": false, 00:07:45.435 "get_zone_info": false, 00:07:45.435 "zone_management": false, 00:07:45.435 "zone_append": false, 00:07:45.435 "compare": true, 00:07:45.435 "compare_and_write": true, 00:07:45.435 "abort": true, 00:07:45.435 "seek_hole": false, 00:07:45.435 "seek_data": false, 00:07:45.435 "copy": true, 00:07:45.435 "nvme_iov_md": false 00:07:45.435 }, 00:07:45.435 "memory_domains": [ 00:07:45.435 { 00:07:45.435 "dma_device_id": "system", 00:07:45.435 "dma_device_type": 1 00:07:45.435 } 00:07:45.435 ], 00:07:45.435 "driver_specific": { 00:07:45.435 "nvme": [ 00:07:45.435 { 00:07:45.435 "trid": { 00:07:45.435 "trtype": "TCP", 00:07:45.435 "adrfam": "IPv4", 00:07:45.435 "traddr": "10.0.0.2", 00:07:45.435 "trsvcid": "4420", 00:07:45.435 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:45.435 }, 00:07:45.435 "ctrlr_data": { 00:07:45.435 "cntlid": 1, 00:07:45.435 "vendor_id": "0x8086", 00:07:45.435 "model_number": "SPDK bdev Controller", 00:07:45.435 "serial_number": "SPDK0", 00:07:45.435 "firmware_revision": "24.09.1", 00:07:45.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:45.435 "oacs": { 00:07:45.435 "security": 0, 00:07:45.435 "format": 0, 00:07:45.435 "firmware": 0, 00:07:45.435 "ns_manage": 0 00:07:45.435 }, 00:07:45.435 "multi_ctrlr": true, 00:07:45.435 "ana_reporting": false 00:07:45.435 }, 00:07:45.435 "vs": { 00:07:45.435 "nvme_version": "1.3" 00:07:45.435 }, 00:07:45.435 "ns_data": { 00:07:45.435 "id": 1, 00:07:45.435 "can_share": true 00:07:45.435 } 00:07:45.435 } 00:07:45.435 ], 00:07:45.435 "mp_policy": "active_passive" 00:07:45.435 } 00:07:45.435 } 00:07:45.435 ] 00:07:45.435 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3310639 00:07:45.435 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:45.435 21:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:45.435 Running I/O for 10 seconds... 00:07:46.822 Latency(us) 00:07:46.822 [2024-10-12T19:56:05.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.822 Nvme0n1 : 1.00 25145.00 98.22 0.00 0.00 0.00 0.00 0.00 00:07:46.822 [2024-10-12T19:56:05.311Z] =================================================================================================================== 00:07:46.822 [2024-10-12T19:56:05.311Z] Total : 25145.00 98.22 0.00 0.00 0.00 0.00 0.00 00:07:46.822 00:07:47.395 21:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:47.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.656 Nvme0n1 : 2.00 25354.50 99.04 0.00 0.00 0.00 0.00 0.00 00:07:47.656 [2024-10-12T19:56:06.145Z] =================================================================================================================== 00:07:47.656 [2024-10-12T19:56:06.145Z] Total : 25354.50 99.04 0.00 0.00 0.00 0.00 0.00 00:07:47.656 00:07:47.656 true 00:07:47.656 21:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:47.656 21:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:47.917 21:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:47.917 21:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:47.917 21:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3310639 00:07:48.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.489 Nvme0n1 : 3.00 25435.67 99.36 0.00 0.00 0.00 0.00 0.00 00:07:48.489 [2024-10-12T19:56:06.978Z] =================================================================================================================== 00:07:48.489 [2024-10-12T19:56:06.978Z] Total : 25435.67 99.36 0.00 0.00 0.00 0.00 0.00 00:07:48.489 00:07:49.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.433 Nvme0n1 : 4.00 25493.00 99.58 0.00 0.00 0.00 0.00 0.00 00:07:49.433 [2024-10-12T19:56:07.922Z] =================================================================================================================== 00:07:49.433 [2024-10-12T19:56:07.922Z] Total : 25493.00 99.58 0.00 0.00 0.00 0.00 0.00 00:07:49.433 00:07:50.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.816 Nvme0n1 : 5.00 25526.80 99.71 0.00 0.00 0.00 0.00 0.00 00:07:50.816 [2024-10-12T19:56:09.305Z] =================================================================================================================== 00:07:50.816 [2024-10-12T19:56:09.305Z] Total : 25526.80 99.71 0.00 0.00 0.00 0.00 0.00 00:07:50.816 00:07:51.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.757 Nvme0n1 : 6.00 25560.33 99.85 0.00 0.00 0.00 0.00 0.00 00:07:51.757 [2024-10-12T19:56:10.246Z] =================================================================================================================== 00:07:51.757 
[2024-10-12T19:56:10.246Z] Total : 25560.33 99.85 0.00 0.00 0.00 0.00 0.00 00:07:51.757 00:07:52.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.699 Nvme0n1 : 7.00 25555.86 99.83 0.00 0.00 0.00 0.00 0.00 00:07:52.699 [2024-10-12T19:56:11.188Z] =================================================================================================================== 00:07:52.699 [2024-10-12T19:56:11.188Z] Total : 25555.86 99.83 0.00 0.00 0.00 0.00 0.00 00:07:52.699 00:07:53.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.640 Nvme0n1 : 8.00 25577.50 99.91 0.00 0.00 0.00 0.00 0.00 00:07:53.640 [2024-10-12T19:56:12.129Z] =================================================================================================================== 00:07:53.640 [2024-10-12T19:56:12.129Z] Total : 25577.50 99.91 0.00 0.00 0.00 0.00 0.00 00:07:53.640 00:07:54.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.583 Nvme0n1 : 9.00 25593.89 99.98 0.00 0.00 0.00 0.00 0.00 00:07:54.583 [2024-10-12T19:56:13.072Z] =================================================================================================================== 00:07:54.583 [2024-10-12T19:56:13.072Z] Total : 25593.89 99.98 0.00 0.00 0.00 0.00 0.00 00:07:54.583 00:07:55.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.526 Nvme0n1 : 10.00 25607.00 100.03 0.00 0.00 0.00 0.00 0.00 00:07:55.526 [2024-10-12T19:56:14.015Z] =================================================================================================================== 00:07:55.526 [2024-10-12T19:56:14.015Z] Total : 25607.00 100.03 0.00 0.00 0.00 0.00 0.00 00:07:55.526 00:07:55.526 00:07:55.526 Latency(us) 00:07:55.526 [2024-10-12T19:56:14.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:55.526 Nvme0n1 : 10.00 25603.98 100.02 0.00 0.00 4996.06 3072.00 14745.60 00:07:55.526 [2024-10-12T19:56:14.015Z] =================================================================================================================== 00:07:55.526 [2024-10-12T19:56:14.015Z] Total : 25603.98 100.02 0.00 0.00 4996.06 3072.00 14745.60 00:07:55.526 { 00:07:55.526 "results": [ 00:07:55.526 { 00:07:55.526 "job": "Nvme0n1", 00:07:55.526 "core_mask": "0x2", 00:07:55.526 "workload": "randwrite", 00:07:55.526 "status": "finished", 00:07:55.526 "queue_depth": 128, 00:07:55.526 "io_size": 4096, 00:07:55.526 "runtime": 10.003718, 00:07:55.526 "iops": 25603.98044007238, 00:07:55.526 "mibps": 100.01554859403274, 00:07:55.526 "io_failed": 0, 00:07:55.526 "io_timeout": 0, 00:07:55.526 "avg_latency_us": 4996.056214314066, 00:07:55.526 "min_latency_us": 3072.0, 00:07:55.526 "max_latency_us": 14745.6 00:07:55.526 } 00:07:55.526 ], 00:07:55.526 "core_count": 1 00:07:55.526 } 00:07:55.526 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3310388 00:07:55.526 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3310388 ']' 00:07:55.526 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3310388 00:07:55.526 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:55.526 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.526 21:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3310388 00:07:55.526 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:55.526 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:55.527 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3310388' 00:07:55.527 killing process with pid 3310388 00:07:55.527 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3310388 00:07:55.527 Received shutdown signal, test time was about 10.000000 seconds 00:07:55.527 00:07:55.527 Latency(us) 00:07:55.527 [2024-10-12T19:56:14.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.527 [2024-10-12T19:56:14.016Z] =================================================================================================================== 00:07:55.527 [2024-10-12T19:56:14.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:55.527 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3310388 00:07:55.788 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.049 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:56.049 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:56.049 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:56.310 21:56:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3306549 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3306549 00:07:56.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3306549 Killed "${NVMF_APP[@]}" "$@" 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=3312763 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 3312763 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3312763 ']' 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.310 21:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.310 [2024-10-12 21:56:14.769277] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:56.310 [2024-10-12 21:56:14.769335] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.570 [2024-10-12 21:56:14.855734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.570 [2024-10-12 21:56:14.885007] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.570 [2024-10-12 21:56:14.885042] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.570 [2024-10-12 21:56:14.885047] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.570 [2024-10-12 21:56:14.885052] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.570 [2024-10-12 21:56:14.885057] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:56.570 [2024-10-12 21:56:14.885073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.141 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.141 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:57.141 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:57.141 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:57.141 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:57.141 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.141 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:57.439 [2024-10-12 21:56:15.741463] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:57.439 [2024-10-12 21:56:15.741559] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:57.439 [2024-10-12 21:56:15.741582] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:57.439 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:57.439 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 96ecb626-48cd-495f-9367-c68f974ca840 00:07:57.439 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=96ecb626-48cd-495f-9367-c68f974ca840 
00:07:57.439 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:57.439 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:57.439 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:57.439 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:57.439 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:57.732 21:56:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 96ecb626-48cd-495f-9367-c68f974ca840 -t 2000 00:07:57.732 [ 00:07:57.732 { 00:07:57.732 "name": "96ecb626-48cd-495f-9367-c68f974ca840", 00:07:57.732 "aliases": [ 00:07:57.732 "lvs/lvol" 00:07:57.732 ], 00:07:57.732 "product_name": "Logical Volume", 00:07:57.732 "block_size": 4096, 00:07:57.732 "num_blocks": 38912, 00:07:57.732 "uuid": "96ecb626-48cd-495f-9367-c68f974ca840", 00:07:57.732 "assigned_rate_limits": { 00:07:57.732 "rw_ios_per_sec": 0, 00:07:57.732 "rw_mbytes_per_sec": 0, 00:07:57.732 "r_mbytes_per_sec": 0, 00:07:57.732 "w_mbytes_per_sec": 0 00:07:57.732 }, 00:07:57.732 "claimed": false, 00:07:57.732 "zoned": false, 00:07:57.732 "supported_io_types": { 00:07:57.732 "read": true, 00:07:57.732 "write": true, 00:07:57.732 "unmap": true, 00:07:57.732 "flush": false, 00:07:57.732 "reset": true, 00:07:57.732 "nvme_admin": false, 00:07:57.732 "nvme_io": false, 00:07:57.732 "nvme_io_md": false, 00:07:57.732 "write_zeroes": true, 00:07:57.732 "zcopy": false, 00:07:57.732 "get_zone_info": false, 00:07:57.732 "zone_management": false, 00:07:57.732 "zone_append": 
false, 00:07:57.732 "compare": false, 00:07:57.732 "compare_and_write": false, 00:07:57.732 "abort": false, 00:07:57.732 "seek_hole": true, 00:07:57.732 "seek_data": true, 00:07:57.732 "copy": false, 00:07:57.732 "nvme_iov_md": false 00:07:57.732 }, 00:07:57.732 "driver_specific": { 00:07:57.732 "lvol": { 00:07:57.732 "lvol_store_uuid": "97284c75-d3fe-409f-8d7a-074d05e1c848", 00:07:57.732 "base_bdev": "aio_bdev", 00:07:57.732 "thin_provision": false, 00:07:57.732 "num_allocated_clusters": 38, 00:07:57.732 "snapshot": false, 00:07:57.732 "clone": false, 00:07:57.732 "esnap_clone": false 00:07:57.732 } 00:07:57.732 } 00:07:57.732 } 00:07:57.732 ] 00:07:57.732 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:57.732 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:57.732 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:58.006 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:58.006 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:58.006 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:58.006 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:58.006 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:58.281 [2024-10-12 21:56:16.550031] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.281 21:56:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:58.281 request: 00:07:58.281 { 00:07:58.281 "uuid": "97284c75-d3fe-409f-8d7a-074d05e1c848", 00:07:58.281 "method": "bdev_lvol_get_lvstores", 00:07:58.281 "req_id": 1 00:07:58.281 } 00:07:58.281 Got JSON-RPC error response 00:07:58.281 response: 00:07:58.281 { 00:07:58.281 "code": -19, 00:07:58.281 "message": "No such device" 00:07:58.281 } 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.281 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:58.542 aio_bdev 00:07:58.542 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 96ecb626-48cd-495f-9367-c68f974ca840 00:07:58.542 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=96ecb626-48cd-495f-9367-c68f974ca840 00:07:58.542 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:58.542 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:58.542 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:58.542 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:58.542 21:56:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:58.802 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 96ecb626-48cd-495f-9367-c68f974ca840 -t 2000 00:07:59.062 [ 00:07:59.062 { 00:07:59.062 "name": "96ecb626-48cd-495f-9367-c68f974ca840", 00:07:59.062 "aliases": [ 00:07:59.062 "lvs/lvol" 00:07:59.062 ], 00:07:59.062 "product_name": "Logical Volume", 00:07:59.062 "block_size": 4096, 00:07:59.062 "num_blocks": 38912, 00:07:59.062 "uuid": "96ecb626-48cd-495f-9367-c68f974ca840", 00:07:59.062 "assigned_rate_limits": { 00:07:59.062 "rw_ios_per_sec": 0, 00:07:59.062 "rw_mbytes_per_sec": 0, 00:07:59.062 "r_mbytes_per_sec": 0, 00:07:59.062 "w_mbytes_per_sec": 0 00:07:59.062 }, 00:07:59.062 "claimed": false, 00:07:59.062 "zoned": false, 00:07:59.062 "supported_io_types": { 00:07:59.062 "read": true, 00:07:59.062 "write": true, 00:07:59.062 "unmap": true, 00:07:59.062 "flush": false, 00:07:59.062 "reset": true, 00:07:59.062 "nvme_admin": false, 00:07:59.062 "nvme_io": false, 00:07:59.062 "nvme_io_md": false, 00:07:59.062 "write_zeroes": true, 00:07:59.062 "zcopy": false, 00:07:59.062 "get_zone_info": false, 00:07:59.062 "zone_management": false, 00:07:59.062 "zone_append": false, 00:07:59.062 "compare": false, 00:07:59.062 "compare_and_write": false, 
00:07:59.062 "abort": false, 00:07:59.062 "seek_hole": true, 00:07:59.062 "seek_data": true, 00:07:59.062 "copy": false, 00:07:59.062 "nvme_iov_md": false 00:07:59.062 }, 00:07:59.062 "driver_specific": { 00:07:59.062 "lvol": { 00:07:59.063 "lvol_store_uuid": "97284c75-d3fe-409f-8d7a-074d05e1c848", 00:07:59.063 "base_bdev": "aio_bdev", 00:07:59.063 "thin_provision": false, 00:07:59.063 "num_allocated_clusters": 38, 00:07:59.063 "snapshot": false, 00:07:59.063 "clone": false, 00:07:59.063 "esnap_clone": false 00:07:59.063 } 00:07:59.063 } 00:07:59.063 } 00:07:59.063 ] 00:07:59.063 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:59.063 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:59.063 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:59.063 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:59.063 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:59.063 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:59.323 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:59.323 21:56:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 96ecb626-48cd-495f-9367-c68f974ca840 00:07:59.583 21:56:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97284c75-d3fe-409f-8d7a-074d05e1c848 00:07:59.583 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:59.843 00:07:59.843 real 0m17.409s 00:07:59.843 user 0m45.956s 00:07:59.843 sys 0m2.905s 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:59.843 ************************************ 00:07:59.843 END TEST lvs_grow_dirty 00:07:59.843 ************************************ 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:59.843 nvmf_trace.0 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.843 rmmod nvme_tcp 00:07:59.843 rmmod nvme_fabrics 00:07:59.843 rmmod nvme_keyring 00:07:59.843 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 3312763 ']' 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 3312763 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3312763 ']' 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3312763 
00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3312763 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3312763' 00:08:00.103 killing process with pid 3312763 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3312763 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3312763 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.103 21:56:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.649 00:08:02.649 real 0m44.690s 00:08:02.649 user 1m8.045s 00:08:02.649 sys 0m10.334s 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.649 ************************************ 00:08:02.649 END TEST nvmf_lvs_grow 00:08:02.649 ************************************ 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.649 ************************************ 00:08:02.649 START TEST nvmf_bdev_io_wait 00:08:02.649 ************************************ 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:02.649 * Looking for test storage... 
00:08:02.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:02.649 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:02.650 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.650 --rc genhtml_branch_coverage=1 00:08:02.650 --rc genhtml_function_coverage=1 00:08:02.650 --rc genhtml_legend=1 00:08:02.650 --rc geninfo_all_blocks=1 00:08:02.650 --rc geninfo_unexecuted_blocks=1 00:08:02.650 00:08:02.650 ' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:02.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.650 --rc genhtml_branch_coverage=1 00:08:02.650 --rc genhtml_function_coverage=1 00:08:02.650 --rc genhtml_legend=1 00:08:02.650 --rc geninfo_all_blocks=1 00:08:02.650 --rc geninfo_unexecuted_blocks=1 00:08:02.650 00:08:02.650 ' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:02.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.650 --rc genhtml_branch_coverage=1 00:08:02.650 --rc genhtml_function_coverage=1 00:08:02.650 --rc genhtml_legend=1 00:08:02.650 --rc geninfo_all_blocks=1 00:08:02.650 --rc geninfo_unexecuted_blocks=1 00:08:02.650 00:08:02.650 ' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:02.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.650 --rc genhtml_branch_coverage=1 00:08:02.650 --rc genhtml_function_coverage=1 00:08:02.650 --rc genhtml_legend=1 00:08:02.650 --rc geninfo_all_blocks=1 00:08:02.650 --rc geninfo_unexecuted_blocks=1 00:08:02.650 00:08:02.650 ' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.650 21:56:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.650 21:56:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:10.789 21:56:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:10.789 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:10.789 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.789 
21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:10.789 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:10.789 21:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:10.789 
21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:10.789 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.789 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:10.790 21:56:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:10.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:08:10.790 00:08:10.790 --- 10.0.0.2 ping statistics --- 00:08:10.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.790 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:08:10.790 00:08:10.790 --- 10.0.0.1 ping statistics --- 00:08:10.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.790 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=3317839 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 3317839 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3317839 ']' 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.790 21:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.790 [2024-10-12 21:56:28.409810] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:10.790 [2024-10-12 21:56:28.409874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.790 [2024-10-12 21:56:28.503490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.790 [2024-10-12 21:56:28.552398] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.790 [2024-10-12 21:56:28.552452] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.790 [2024-10-12 21:56:28.552461] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.790 [2024-10-12 21:56:28.552469] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.790 [2024-10-12 21:56:28.552475] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:10.790 [2024-10-12 21:56:28.552632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.790 [2024-10-12 21:56:28.552786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.790 [2024-10-12 21:56:28.552944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.790 [2024-10-12 21:56:28.552944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.790 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.790 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:10.790 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:10.790 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.790 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 [2024-10-12 21:56:29.364149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 Malloc0 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:11.052 21:56:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 [2024-10-12 21:56:29.443821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3318054 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3318058 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:11.052 { 00:08:11.052 "params": { 00:08:11.052 "name": "Nvme$subsystem", 00:08:11.052 "trtype": "$TEST_TRANSPORT", 00:08:11.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:11.052 "adrfam": "ipv4", 00:08:11.052 "trsvcid": "$NVMF_PORT", 00:08:11.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:11.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:11.052 "hdgst": ${hdgst:-false}, 00:08:11.052 "ddgst": ${ddgst:-false} 00:08:11.052 }, 00:08:11.052 "method": "bdev_nvme_attach_controller" 00:08:11.052 } 00:08:11.052 EOF 00:08:11.052 )") 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3318060 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3318064 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:11.052 { 00:08:11.052 "params": { 00:08:11.052 "name": "Nvme$subsystem", 00:08:11.052 "trtype": "$TEST_TRANSPORT", 00:08:11.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:11.052 "adrfam": "ipv4", 00:08:11.052 "trsvcid": "$NVMF_PORT", 00:08:11.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:11.052 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:11.052 "hdgst": ${hdgst:-false}, 00:08:11.052 "ddgst": ${ddgst:-false} 00:08:11.052 }, 00:08:11.052 "method": "bdev_nvme_attach_controller" 00:08:11.052 } 00:08:11.052 EOF 00:08:11.052 )") 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:11.052 { 00:08:11.052 "params": { 00:08:11.052 "name": "Nvme$subsystem", 00:08:11.052 "trtype": "$TEST_TRANSPORT", 00:08:11.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:11.052 "adrfam": "ipv4", 00:08:11.052 "trsvcid": "$NVMF_PORT", 00:08:11.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:11.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:11.052 "hdgst": ${hdgst:-false}, 00:08:11.052 "ddgst": ${ddgst:-false} 00:08:11.052 }, 00:08:11.052 "method": "bdev_nvme_attach_controller" 00:08:11.052 } 00:08:11.052 EOF 00:08:11.052 )") 00:08:11.052 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 
--json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:11.053 { 00:08:11.053 "params": { 00:08:11.053 "name": "Nvme$subsystem", 00:08:11.053 "trtype": "$TEST_TRANSPORT", 00:08:11.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:11.053 "adrfam": "ipv4", 00:08:11.053 "trsvcid": "$NVMF_PORT", 00:08:11.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:11.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:11.053 "hdgst": ${hdgst:-false}, 00:08:11.053 "ddgst": ${ddgst:-false} 00:08:11.053 }, 00:08:11.053 "method": "bdev_nvme_attach_controller" 00:08:11.053 } 00:08:11.053 EOF 00:08:11.053 )") 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3318054 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:11.053 "params": { 00:08:11.053 "name": "Nvme1", 00:08:11.053 "trtype": "tcp", 00:08:11.053 "traddr": "10.0.0.2", 00:08:11.053 "adrfam": "ipv4", 00:08:11.053 "trsvcid": "4420", 00:08:11.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:11.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:11.053 "hdgst": false, 00:08:11.053 "ddgst": false 00:08:11.053 }, 00:08:11.053 "method": "bdev_nvme_attach_controller" 00:08:11.053 }' 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:11.053 "params": { 00:08:11.053 "name": "Nvme1", 00:08:11.053 "trtype": "tcp", 00:08:11.053 "traddr": "10.0.0.2", 00:08:11.053 "adrfam": "ipv4", 00:08:11.053 "trsvcid": "4420", 00:08:11.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:11.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:11.053 "hdgst": false, 00:08:11.053 "ddgst": false 00:08:11.053 }, 00:08:11.053 "method": "bdev_nvme_attach_controller" 00:08:11.053 }' 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:11.053 "params": { 00:08:11.053 "name": "Nvme1", 00:08:11.053 "trtype": "tcp", 00:08:11.053 "traddr": "10.0.0.2", 00:08:11.053 "adrfam": "ipv4", 00:08:11.053 "trsvcid": "4420", 00:08:11.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:11.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:11.053 "hdgst": false, 00:08:11.053 "ddgst": false 00:08:11.053 }, 00:08:11.053 "method": "bdev_nvme_attach_controller" 00:08:11.053 }' 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@581 -- # IFS=, 00:08:11.053 21:56:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:11.053 "params": { 00:08:11.053 "name": "Nvme1", 00:08:11.053 "trtype": "tcp", 00:08:11.053 "traddr": "10.0.0.2", 00:08:11.053 "adrfam": "ipv4", 00:08:11.053 "trsvcid": "4420", 00:08:11.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:11.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:11.053 "hdgst": false, 00:08:11.053 "ddgst": false 00:08:11.053 }, 00:08:11.053 "method": "bdev_nvme_attach_controller" 00:08:11.053 }' 00:08:11.053 [2024-10-12 21:56:29.501556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:11.053 [2024-10-12 21:56:29.501625] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:11.053 [2024-10-12 21:56:29.503405] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:11.053 [2024-10-12 21:56:29.503476] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:11.053 [2024-10-12 21:56:29.504256] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:11.053 [2024-10-12 21:56:29.504319] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:11.053 [2024-10-12 21:56:29.507680] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:11.053 [2024-10-12 21:56:29.507740] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:11.314 [2024-10-12 21:56:29.693735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.314 [2024-10-12 21:56:29.720977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:08:11.314 [2024-10-12 21:56:29.781726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.575 [2024-10-12 21:56:29.805571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:08:11.575 [2024-10-12 21:56:29.827154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.575 [2024-10-12 21:56:29.853857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:08:11.575 [2024-10-12 21:56:29.921207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.575 [2024-10-12 21:56:29.953058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:11.835 Running I/O for 1 seconds... 00:08:11.835 Running I/O for 1 seconds... 00:08:12.096 Running I/O for 1 seconds... 00:08:12.096 Running I/O for 1 seconds... 
00:08:12.929 7064.00 IOPS, 27.59 MiB/s 00:08:12.929 Latency(us) 00:08:12.929 [2024-10-12T19:56:31.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.929 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:12.929 Nvme1n1 : 1.02 7031.84 27.47 0.00 0.00 18043.94 7755.09 32549.55 00:08:12.929 [2024-10-12T19:56:31.418Z] =================================================================================================================== 00:08:12.929 [2024-10-12T19:56:31.418Z] Total : 7031.84 27.47 0.00 0.00 18043.94 7755.09 32549.55 00:08:12.929 5978.00 IOPS, 23.35 MiB/s 00:08:12.929 Latency(us) 00:08:12.929 [2024-10-12T19:56:31.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.929 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:12.929 Nvme1n1 : 1.01 6033.05 23.57 0.00 0.00 21037.10 12178.77 36481.71 00:08:12.929 [2024-10-12T19:56:31.418Z] =================================================================================================================== 00:08:12.929 [2024-10-12T19:56:31.418Z] Total : 6033.05 23.57 0.00 0.00 21037.10 12178.77 36481.71 00:08:13.190 187776.00 IOPS, 733.50 MiB/s 00:08:13.190 Latency(us) 00:08:13.190 [2024-10-12T19:56:31.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.190 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:13.190 Nvme1n1 : 1.00 187397.59 732.02 0.00 0.00 679.36 308.91 1979.73 00:08:13.190 [2024-10-12T19:56:31.679Z] =================================================================================================================== 00:08:13.190 [2024-10-12T19:56:31.679Z] Total : 187397.59 732.02 0.00 0.00 679.36 308.91 1979.73 00:08:13.190 7302.00 IOPS, 28.52 MiB/s 00:08:13.190 Latency(us) 00:08:13.190 [2024-10-12T19:56:31.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.190 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:08:13.190 Nvme1n1 : 1.01 7413.18 28.96 0.00 0.00 17215.72 4232.53 48059.73 00:08:13.190 [2024-10-12T19:56:31.679Z] =================================================================================================================== 00:08:13.190 [2024-10-12T19:56:31.679Z] Total : 7413.18 28.96 0.00 0.00 17215.72 4232.53 48059.73 00:08:13.190 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3318058 00:08:13.190 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3318060 00:08:13.190 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3318064 00:08:13.190 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.190 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.190 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:13.190 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.190 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:13.190 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:13.191 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:13.191 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:13.191 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.191 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:13.191 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:08:13.191 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:13.191 rmmod nvme_tcp 00:08:13.191 rmmod nvme_fabrics 00:08:13.191 rmmod nvme_keyring 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 3317839 ']' 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 3317839 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3317839 ']' 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3317839 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3317839 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3317839' 00:08:13.451 killing process with pid 3317839 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3317839 00:08:13.451 21:56:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3317839 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.451 21:56:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.997 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.997 00:08:15.997 real 0m13.279s 00:08:15.997 user 0m20.983s 00:08:15.997 sys 0m7.437s 00:08:15.997 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.997 21:56:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.997 ************************************ 
00:08:15.997 END TEST nvmf_bdev_io_wait 00:08:15.997 ************************************ 00:08:15.997 21:56:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:15.997 21:56:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:15.997 21:56:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.997 21:56:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.997 ************************************ 00:08:15.997 START TEST nvmf_queue_depth 00:08:15.997 ************************************ 00:08:15.997 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:15.997 * Looking for test storage... 00:08:15.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.997 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:15.997 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:15.997 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:15.997 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.998 --rc genhtml_branch_coverage=1 00:08:15.998 --rc genhtml_function_coverage=1 00:08:15.998 --rc genhtml_legend=1 00:08:15.998 --rc geninfo_all_blocks=1 00:08:15.998 --rc 
geninfo_unexecuted_blocks=1 00:08:15.998 00:08:15.998 ' 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.998 --rc genhtml_branch_coverage=1 00:08:15.998 --rc genhtml_function_coverage=1 00:08:15.998 --rc genhtml_legend=1 00:08:15.998 --rc geninfo_all_blocks=1 00:08:15.998 --rc geninfo_unexecuted_blocks=1 00:08:15.998 00:08:15.998 ' 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.998 --rc genhtml_branch_coverage=1 00:08:15.998 --rc genhtml_function_coverage=1 00:08:15.998 --rc genhtml_legend=1 00:08:15.998 --rc geninfo_all_blocks=1 00:08:15.998 --rc geninfo_unexecuted_blocks=1 00:08:15.998 00:08:15.998 ' 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:15.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.998 --rc genhtml_branch_coverage=1 00:08:15.998 --rc genhtml_function_coverage=1 00:08:15.998 --rc genhtml_legend=1 00:08:15.998 --rc geninfo_all_blocks=1 00:08:15.998 --rc geninfo_unexecuted_blocks=1 00:08:15.998 00:08:15.998 ' 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.998 21:56:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.998 21:56:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:15.998 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.999 21:56:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.999 21:56:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:24.142 21:56:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 
== mlx5 ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:24.142 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:24.142 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:24.142 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:24.142 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:24.143 
21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:24.143 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:24.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:08:24.143 00:08:24.143 --- 10.0.0.2 ping statistics --- 00:08:24.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.143 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:08:24.143 00:08:24.143 --- 10.0.0.1 ping statistics --- 00:08:24.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.143 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
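The `ipts` wrapper expanded above tags every rule it inserts with an `SPDK_NVMF:` comment carrying the rule's own argument string; teardown later in this log restores `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A hypothetical Python rendering of that tagging convention (function names are illustrative, not part of SPDK):

```python
# Sketch of the rule-tagging scheme visible in this log: each inserted
# iptables rule carries its own argument string as an SPDK_NVMF: comment,
# so cleanup can filter tagged rules out of `iptables-save` output.
TAG = "SPDK_NVMF:"

def tag_rule(args: str) -> str:
    """Build the command line the ipts wrapper would run for `args`."""
    return f"iptables {args} -m comment --comment '{TAG}{args}'"

def strip_tagged(saved_rules: list[str]) -> list[str]:
    """Drop tagged rules, mimicking `iptables-save | grep -v SPDK_NVMF`."""
    return [r for r in saved_rules if TAG not in r]

if __name__ == "__main__":
    rule = tag_rule("-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT")
    print(rule)
    print(strip_tagged([rule, "-A INPUT -j DROP"]))
```

Because the tag embeds the original arguments, no separate bookkeeping of inserted rules is needed at cleanup time.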
00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=3322841 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 3322841 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3322841 ']' 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.143 21:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.143 [2024-10-12 21:56:41.884754] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:24.143 [2024-10-12 21:56:41.884827] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.143 [2024-10-12 21:56:41.976288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.143 [2024-10-12 21:56:42.023253] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.143 [2024-10-12 21:56:42.023309] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.143 [2024-10-12 21:56:42.023318] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.143 [2024-10-12 21:56:42.023325] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.143 [2024-10-12 21:56:42.023331] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:24.143 [2024-10-12 21:56:42.023354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.405 [2024-10-12 21:56:42.741591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.405 Malloc0 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.405 21:56:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.405 [2024-10-12 21:56:42.821387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3322929 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3322929 /var/tmp/bdevperf.sock 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3322929 ']' 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.405 21:56:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.405 [2024-10-12 21:56:42.878722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:24.405 [2024-10-12 21:56:42.878785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322929 ] 00:08:24.666 [2024-10-12 21:56:42.960177] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.666 [2024-10-12 21:56:43.006404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.238 21:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.238 21:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:25.238 21:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:25.238 21:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.238 21:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.500 NVMe0n1 00:08:25.500 21:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.500 21:56:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:25.500 Running I/O for 10 seconds... 
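The per-second progress lines and the final JSON summary that follow are linked by simple arithmetic: with the 4096-byte I/O size passed as `-o 4096`, bdevperf's MiB/s figure is IOPS times the I/O size. A quick cross-check using the values printed in this run's summary:

```python
# Cross-check bdevperf's reported throughput: MiB/s = IOPS * io_size / 2**20.
# The numeric values below are copied from this run's JSON summary.
iops = 12404.79629196907
io_size = 4096          # bytes, from `-o 4096` on the bdevperf command line
runtime = 10.061189     # seconds, from the "runtime" field

mib_per_s = iops * io_size / (1 << 20)   # should reproduce the "mibps" field
total_ios = iops * runtime               # approximate I/O count over the run

print(f"{mib_per_s:.2f} MiB/s over ~{total_ios:.0f} I/Os")
```

The computed value matches the `"mibps": 48.456...` field in the summary, confirming the two figures are the same measurement in different units.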
00:08:27.828 8192.00 IOPS, 32.00 MiB/s [2024-10-12T19:56:47.260Z] 8695.50 IOPS, 33.97 MiB/s [2024-10-12T19:56:48.202Z] 9831.00 IOPS, 38.40 MiB/s [2024-10-12T19:56:49.144Z] 10593.25 IOPS, 41.38 MiB/s [2024-10-12T19:56:50.085Z] 11059.20 IOPS, 43.20 MiB/s [2024-10-12T19:56:51.027Z] 11465.17 IOPS, 44.79 MiB/s [2024-10-12T19:56:51.969Z] 11821.43 IOPS, 46.18 MiB/s [2024-10-12T19:56:53.353Z] 12029.50 IOPS, 46.99 MiB/s [2024-10-12T19:56:54.295Z] 12196.00 IOPS, 47.64 MiB/s [2024-10-12T19:56:54.295Z] 12378.30 IOPS, 48.35 MiB/s 00:08:35.806 Latency(us) 00:08:35.806 [2024-10-12T19:56:54.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.806 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:35.806 Verification LBA range: start 0x0 length 0x4000 00:08:35.806 NVMe0n1 : 10.06 12404.80 48.46 0.00 0.00 82279.53 25449.81 76021.76 00:08:35.806 [2024-10-12T19:56:54.295Z] =================================================================================================================== 00:08:35.806 [2024-10-12T19:56:54.295Z] Total : 12404.80 48.46 0.00 0.00 82279.53 25449.81 76021.76 00:08:35.806 { 00:08:35.806 "results": [ 00:08:35.806 { 00:08:35.806 "job": "NVMe0n1", 00:08:35.806 "core_mask": "0x1", 00:08:35.806 "workload": "verify", 00:08:35.806 "status": "finished", 00:08:35.806 "verify_range": { 00:08:35.806 "start": 0, 00:08:35.806 "length": 16384 00:08:35.806 }, 00:08:35.806 "queue_depth": 1024, 00:08:35.806 "io_size": 4096, 00:08:35.806 "runtime": 10.061189, 00:08:35.806 "iops": 12404.79629196907, 00:08:35.806 "mibps": 48.45623551550418, 00:08:35.806 "io_failed": 0, 00:08:35.806 "io_timeout": 0, 00:08:35.806 "avg_latency_us": 82279.53292790736, 00:08:35.806 "min_latency_us": 25449.81333333333, 00:08:35.806 "max_latency_us": 76021.76 00:08:35.806 } 00:08:35.806 ], 00:08:35.806 "core_count": 1 00:08:35.806 } 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
3322929 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3322929 ']' 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3322929 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3322929 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3322929' 00:08:35.806 killing process with pid 3322929 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3322929 00:08:35.806 Received shutdown signal, test time was about 10.000000 seconds 00:08:35.806 00:08:35.806 Latency(us) 00:08:35.806 [2024-10-12T19:56:54.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.806 [2024-10-12T19:56:54.295Z] =================================================================================================================== 00:08:35.806 [2024-10-12T19:56:54.295Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3322929 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.806 rmmod nvme_tcp 00:08:35.806 rmmod nvme_fabrics 00:08:35.806 rmmod nvme_keyring 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 3322841 ']' 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 3322841 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3322841 ']' 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3322841 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.806 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3322841 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3322841' 00:08:36.067 killing process with pid 3322841 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3322841 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3322841 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.067 21:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.613 21:56:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:38.613 00:08:38.613 real 0m22.497s 00:08:38.613 user 0m25.702s 00:08:38.613 sys 0m7.095s 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.613 ************************************ 00:08:38.613 END TEST nvmf_queue_depth 00:08:38.613 ************************************ 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.613 ************************************ 00:08:38.613 START TEST nvmf_target_multipath 00:08:38.613 ************************************ 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:38.613 * Looking for test storage... 
00:08:38.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:38.613 21:56:56 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:38.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.613 --rc genhtml_branch_coverage=1 00:08:38.613 --rc genhtml_function_coverage=1 00:08:38.613 --rc genhtml_legend=1 00:08:38.613 --rc geninfo_all_blocks=1 00:08:38.613 --rc geninfo_unexecuted_blocks=1 00:08:38.613 00:08:38.613 ' 00:08:38.613 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:38.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.613 --rc genhtml_branch_coverage=1 00:08:38.613 --rc genhtml_function_coverage=1 00:08:38.614 --rc genhtml_legend=1 00:08:38.614 --rc geninfo_all_blocks=1 00:08:38.614 --rc geninfo_unexecuted_blocks=1 00:08:38.614 00:08:38.614 ' 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.614 --rc genhtml_branch_coverage=1 00:08:38.614 --rc genhtml_function_coverage=1 00:08:38.614 --rc genhtml_legend=1 00:08:38.614 --rc geninfo_all_blocks=1 00:08:38.614 --rc geninfo_unexecuted_blocks=1 00:08:38.614 00:08:38.614 ' 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.614 --rc genhtml_branch_coverage=1 00:08:38.614 --rc genhtml_function_coverage=1 00:08:38.614 --rc genhtml_legend=1 00:08:38.614 --rc geninfo_all_blocks=1 00:08:38.614 --rc geninfo_unexecuted_blocks=1 00:08:38.614 00:08:38.614 ' 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:38.614 21:56:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:46.759 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 
2 == 0 )) 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:46.760 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:46.760 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:46.760 21:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:46.760 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.760 21:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:46.760 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.760 
21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:46.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:08:46.760 00:08:46.760 --- 10.0.0.2 ping statistics --- 00:08:46.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.760 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:08:46.760 00:08:46.760 --- 10.0.0.1 ping statistics --- 00:08:46.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.760 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:46.760 only one NIC for nvmf test 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.760 rmmod nvme_tcp 00:08:46.760 rmmod nvme_fabrics 00:08:46.760 rmmod nvme_keyring 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == 
iso ']' 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.760 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.761 21:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.146 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:48.146 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:48.146 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:48.146 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:48.146 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:48.146 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:08:48.146 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:48.146 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.146 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.146 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:48.408 00:08:48.408 real 0m10.031s 00:08:48.408 user 0m2.183s 00:08:48.408 sys 0m5.803s 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:48.408 ************************************ 00:08:48.408 END TEST nvmf_target_multipath 00:08:48.408 ************************************ 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.408 ************************************ 00:08:48.408 START TEST nvmf_zcopy 00:08:48.408 ************************************ 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:48.408 * Looking for test storage... 
00:08:48.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:08:48.408 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.670 
21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:48.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.670 --rc genhtml_branch_coverage=1 00:08:48.670 --rc genhtml_function_coverage=1 00:08:48.670 --rc genhtml_legend=1 00:08:48.670 --rc geninfo_all_blocks=1 00:08:48.670 --rc 
geninfo_unexecuted_blocks=1 00:08:48.670 00:08:48.670 ' 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:48.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.670 --rc genhtml_branch_coverage=1 00:08:48.670 --rc genhtml_function_coverage=1 00:08:48.670 --rc genhtml_legend=1 00:08:48.670 --rc geninfo_all_blocks=1 00:08:48.670 --rc geninfo_unexecuted_blocks=1 00:08:48.670 00:08:48.670 ' 00:08:48.670 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:48.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.670 --rc genhtml_branch_coverage=1 00:08:48.670 --rc genhtml_function_coverage=1 00:08:48.670 --rc genhtml_legend=1 00:08:48.670 --rc geninfo_all_blocks=1 00:08:48.670 --rc geninfo_unexecuted_blocks=1 00:08:48.670 00:08:48.670 ' 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:48.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.671 --rc genhtml_branch_coverage=1 00:08:48.671 --rc genhtml_function_coverage=1 00:08:48.671 --rc genhtml_legend=1 00:08:48.671 --rc geninfo_all_blocks=1 00:08:48.671 --rc geninfo_unexecuted_blocks=1 00:08:48.671 00:08:48.671 ' 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.671 21:57:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:48.671 21:57:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # 
set +x 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:56.817 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice 
== unknown ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:56.817 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:56.817 21:57:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:56.817 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:56.817 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:56.817 21:57:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:56.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:08:56.817 00:08:56.817 --- 10.0.0.2 ping statistics --- 00:08:56.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.817 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:08:56.817 00:08:56.817 --- 10.0.0.1 ping statistics --- 00:08:56.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.817 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:56.817 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=3334388 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 3334388 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3334388 ']' 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.818 21:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.818 [2024-10-12 21:57:14.616576] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:56.818 [2024-10-12 21:57:14.616650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.818 [2024-10-12 21:57:14.704883] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.818 [2024-10-12 21:57:14.751965] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.818 [2024-10-12 21:57:14.752014] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:56.818 [2024-10-12 21:57:14.752023] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.818 [2024-10-12 21:57:14.752030] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.818 [2024-10-12 21:57:14.752036] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.818 [2024-10-12 21:57:14.752060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.079 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.079 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:57.079 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:57.079 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.079 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.079 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.079 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:57.079 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:57.079 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.079 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.080 [2024-10-12 21:57:15.487333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.080 [2024-10-12 21:57:15.511635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.080 malloc0 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.080 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.341 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.341 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:57.341 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:57.341 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:08:57.341 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:08:57.341 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:57.341 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:57.341 { 00:08:57.341 "params": { 00:08:57.341 "name": "Nvme$subsystem", 00:08:57.341 "trtype": "$TEST_TRANSPORT", 00:08:57.341 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.341 "adrfam": "ipv4", 00:08:57.341 "trsvcid": "$NVMF_PORT", 00:08:57.341 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.341 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.341 "hdgst": ${hdgst:-false}, 00:08:57.341 "ddgst": ${ddgst:-false} 00:08:57.341 }, 00:08:57.341 "method": "bdev_nvme_attach_controller" 00:08:57.341 } 00:08:57.341 EOF 00:08:57.341 )") 00:08:57.341 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:08:57.341 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:08:57.341 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:08:57.342 21:57:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:57.342 "params": { 00:08:57.342 "name": "Nvme1", 00:08:57.342 "trtype": "tcp", 00:08:57.342 "traddr": "10.0.0.2", 00:08:57.342 "adrfam": "ipv4", 00:08:57.342 "trsvcid": "4420", 00:08:57.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.342 "hdgst": false, 00:08:57.342 "ddgst": false 00:08:57.342 }, 00:08:57.342 "method": "bdev_nvme_attach_controller" 00:08:57.342 }' 00:08:57.342 [2024-10-12 21:57:15.622669] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:57.342 [2024-10-12 21:57:15.622735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3334549 ] 00:08:57.342 [2024-10-12 21:57:15.703722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.342 [2024-10-12 21:57:15.749503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.603 Running I/O for 10 seconds... 
00:08:59.933 6458.00 IOPS, 50.45 MiB/s [2024-10-12T19:57:19.365Z] 6517.00 IOPS, 50.91 MiB/s [2024-10-12T19:57:20.370Z] 7418.33 IOPS, 57.96 MiB/s [2024-10-12T19:57:21.334Z] 8006.25 IOPS, 62.55 MiB/s [2024-10-12T19:57:22.276Z] 8363.60 IOPS, 65.34 MiB/s [2024-10-12T19:57:23.220Z] 8596.50 IOPS, 67.16 MiB/s [2024-10-12T19:57:24.162Z] 8764.29 IOPS, 68.47 MiB/s [2024-10-12T19:57:25.105Z] 8888.38 IOPS, 69.44 MiB/s [2024-10-12T19:57:26.048Z] 8986.89 IOPS, 70.21 MiB/s [2024-10-12T19:57:26.310Z] 9066.30 IOPS, 70.83 MiB/s 00:09:07.821 Latency(us) 00:09:07.821 [2024-10-12T19:57:26.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.821 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:07.821 Verification LBA range: start 0x0 length 0x1000 00:09:07.821 Nvme1n1 : 10.01 9067.92 70.84 0.00 0.00 14069.02 1802.24 27852.80 00:09:07.821 [2024-10-12T19:57:26.310Z] =================================================================================================================== 00:09:07.821 [2024-10-12T19:57:26.310Z] Total : 9067.92 70.84 0.00 0.00 14069.02 1802.24 27852.80 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3336576 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:09:07.821 21:57:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:07.821 { 00:09:07.821 "params": { 00:09:07.821 "name": "Nvme$subsystem", 00:09:07.821 "trtype": "$TEST_TRANSPORT", 00:09:07.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.821 "adrfam": "ipv4", 00:09:07.821 "trsvcid": "$NVMF_PORT", 00:09:07.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.821 "hdgst": ${hdgst:-false}, 00:09:07.821 "ddgst": ${ddgst:-false} 00:09:07.821 }, 00:09:07.821 "method": "bdev_nvme_attach_controller" 00:09:07.821 } 00:09:07.821 EOF 00:09:07.821 )") 00:09:07.821 [2024-10-12 21:57:26.169957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.169987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:09:07.821 [2024-10-12 21:57:26.177950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.177960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:09:07.821 21:57:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:07.821 "params": { 00:09:07.821 "name": "Nvme1", 00:09:07.821 "trtype": "tcp", 00:09:07.821 "traddr": "10.0.0.2", 00:09:07.821 "adrfam": "ipv4", 00:09:07.821 "trsvcid": "4420", 00:09:07.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:07.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:07.821 "hdgst": false, 00:09:07.821 "ddgst": false 00:09:07.821 }, 00:09:07.821 "method": "bdev_nvme_attach_controller" 00:09:07.821 }' 00:09:07.821 [2024-10-12 21:57:26.185969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.185977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 [2024-10-12 21:57:26.193989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.193997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 [2024-10-12 21:57:26.206020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.206029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 [2024-10-12 21:57:26.214677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:07.821 [2024-10-12 21:57:26.214725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336576 ] 00:09:07.821 [2024-10-12 21:57:26.218051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.218058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 [2024-10-12 21:57:26.230082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.230095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 [2024-10-12 21:57:26.242116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.242124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 [2024-10-12 21:57:26.250138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.250146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 [2024-10-12 21:57:26.258156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.258165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 [2024-10-12 21:57:26.266176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.266184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 [2024-10-12 21:57:26.274195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.274203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:07.821 [2024-10-12 21:57:26.286225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.286233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.821 [2024-10-12 21:57:26.288616] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.821 [2024-10-12 21:57:26.298258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.821 [2024-10-12 21:57:26.298267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.310289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.310304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.316779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.082 [2024-10-12 21:57:26.322318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.322327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.334355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.334370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.346385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.346396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.358413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.358423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.370442] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.370452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.382484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.382501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.394510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.394521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.406542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.406553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.418570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.418580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.430601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.430614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.442633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.442642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.454665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.454674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.466698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.466708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.478729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.478737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.490762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.490771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.502794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.502803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.514826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.514836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.526858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.526866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.538889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.538898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.082 [2024-10-12 21:57:26.550921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.082 [2024-10-12 21:57:26.550931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.600577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 
[2024-10-12 21:57:26.600593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.611085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.611095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 Running I/O for 5 seconds... 00:09:08.343 [2024-10-12 21:57:26.627011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.627028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.639285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.639302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.652227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.652245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.665540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.665557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.678870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.678885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.691817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.691833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.704454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 
21:57:26.704475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.717262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.717279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.731180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.731196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.743815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.743832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.756986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.757002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.770096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.770117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.783243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.783258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.796794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.796810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.810338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.810354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:08.343 [2024-10-12 21:57:26.823308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.343 [2024-10-12 21:57:26.823324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.603 [2024-10-12 21:57:26.836610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.603 [2024-10-12 21:57:26.836625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.603 [2024-10-12 21:57:26.850027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.604 [2024-10-12 21:57:26.850042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.604 [2024-10-12 21:57:26.863266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.604 [2024-10-12 21:57:26.863282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.604 [2024-10-12 21:57:26.876768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.604 [2024-10-12 21:57:26.876783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.604 [2024-10-12 21:57:26.890490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.604 [2024-10-12 21:57:26.890505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.604 [2024-10-12 21:57:26.903484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.604 [2024-10-12 21:57:26.903499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.604 [2024-10-12 21:57:26.916308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.604 [2024-10-12 21:57:26.916323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.604 
[2024-10-12 21:57:26.929681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:26.929696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:26.942952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:26.942966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:26.956241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:26.956264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:26.969002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:26.969017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:26.981383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:26.981398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:26.994668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:26.994682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:27.008182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:27.008197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:27.020789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:27.020804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:27.034146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:27.034162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:27.047512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:27.047527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:27.060225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:27.060239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:27.073410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:27.073425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.604 [2024-10-12 21:57:27.086128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.604 [2024-10-12 21:57:27.086143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.099280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.099295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.112922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.112937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.125142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.125157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.138283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.138298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.151502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.151517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.164858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.164873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.178646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.178661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.192295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.192310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.205748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.205762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.219211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.219225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.231787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.231801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.244138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.244153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.257002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.257017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.270624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.270639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.284220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.284235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.296671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.296685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.309307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.309321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.322228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.322243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.335001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.335015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:08.865 [2024-10-12 21:57:27.347483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:08.865 [2024-10-12 21:57:27.347497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.359574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.359590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.372339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.372354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.385154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.385168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.398000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.398015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.411280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.411295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.423718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.423733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.436411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.436426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.449412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.449427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.463131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.463146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.475537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.475551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.488933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.488948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.502204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.502220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.515315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.515330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.528636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.528652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.541406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.541420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.553497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.553512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.566794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.566808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.580209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.580223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.592663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.592678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.126 [2024-10-12 21:57:27.605648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.126 [2024-10-12 21:57:27.605663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.386 19109.00 IOPS, 149.29 MiB/s [2024-10-12T19:57:27.875Z] [2024-10-12 21:57:27.619119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.386 [2024-10-12 21:57:27.619134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.631891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.631907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.645328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.645343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.658271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.658287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.670603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.670619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.683896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.683915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.697216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.697231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.710342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.710357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.723714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.723729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.736396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.736411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.750133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.750148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.763094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.763112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.775647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.775662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.788550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.788565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.801763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.801778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.815077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.815092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.828391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.828406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.841869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.841884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.854338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.854353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.387 [2024-10-12 21:57:27.867013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.387 [2024-10-12 21:57:27.867028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:27.880442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:27.880458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:27.893184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:27.893199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:27.906670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:27.906686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:27.919426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:27.919441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:27.931933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:27.931953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:27.944475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:27.944490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:27.957783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:27.957798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:27.970369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:27.970384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:27.983302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:27.983317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:27.996275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:27.996290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:28.009693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:28.009708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:28.022387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:28.022403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:28.035341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:28.035356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:28.048580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:28.048596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:28.061231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:28.061246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:28.074354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:28.074369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:28.087808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:28.087824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:28.101034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:28.101049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:28.114360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:28.114375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.646 [2024-10-12 21:57:28.128055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.646 [2024-10-12 21:57:28.128070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.140754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.140770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.154562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.154578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.168306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.168321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.181740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.181759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.195185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.195202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.208551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.208567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.221702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.221718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.234584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.234600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.248138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.248153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.260603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.260619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.273486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.273501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.286395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.286410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.299742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.299757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.312747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.312762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.325119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.325135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.338355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.338371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.351579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.351595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.364717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.364732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.378180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.378195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:09.907 [2024-10-12 21:57:28.391548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:09.907 [2024-10-12 21:57:28.391564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.167 [2024-10-12 21:57:28.405031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.167 [2024-10-12 21:57:28.405047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.167 [2024-10-12 21:57:28.418396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.167 [2024-10-12 21:57:28.418412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.167 [2024-10-12 21:57:28.431278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.167 [2024-10-12 21:57:28.431298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.444212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.444228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.457195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.457211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.470335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.470351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.483794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.483809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.497071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.497086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.510301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.510316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.523780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.523795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.536514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.536530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.549597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.549612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.562891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.562907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.576398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.576413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.588962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.588977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.601853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.601868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.614244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.614259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 19224.00 IOPS, 150.19 MiB/s [2024-10-12T19:57:28.657Z] [2024-10-12 21:57:28.627392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.627407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.640670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.640686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.168 [2024-10-12 21:57:28.654156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.168 [2024-10-12 21:57:28.654172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.428 [2024-10-12 21:57:28.667377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.428 [2024-10-12 21:57:28.667394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.428 [2024-10-12 21:57:28.680553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.428 [2024-10-12 21:57:28.680568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.428 [2024-10-12 21:57:28.693488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.428 [2024-10-12 21:57:28.693503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.428 [2024-10-12 21:57:28.706648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.428 [2024-10-12 21:57:28.706663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.719854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.719868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.732749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.732764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.745595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.745609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.757860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.757875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.770837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.770852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.783651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.783666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.796812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.796827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.809332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.809347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.821580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.821595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.833964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.833979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.846200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.846215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.859510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.859525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.872981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.872996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.886508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.886523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.899680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.899695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.429 [2024-10-12 21:57:28.912650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.429 [2024-10-12 21:57:28.912665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:28.925416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:28.925432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:28.937987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:28.938002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:28.951547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:28.951562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:28.965063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:28.965078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:28.978571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:28.978586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:28.992040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:28.992055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:29.004849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:29.004865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:29.018136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:29.018152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:29.030369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:29.030385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:29.043383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:29.043399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:29.056114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:29.056129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:29.068420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:29.068435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:29.080910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:29.080924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:29.094673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:29.094688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:29.107290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:29.107305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.689 [2024-10-12 21:57:29.120513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.689 [2024-10-12 21:57:29.120528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.690 [2024-10-12 21:57:29.134255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.690 [2024-10-12 21:57:29.134270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.690 [2024-10-12 21:57:29.147410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.690 [2024-10-12 21:57:29.147425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.690 [2024-10-12 21:57:29.160665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.690 [2024-10-12 21:57:29.160680]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.690 [2024-10-12 21:57:29.173444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.690 [2024-10-12 21:57:29.173459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.186834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.186850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.199402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.199417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.212741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.212756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.225415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.225429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.238310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.238325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.251145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.251159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.264458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.264472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:10.950 [2024-10-12 21:57:29.278240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.278254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.291011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.291026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.303851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.303866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.317268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.317283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.330654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.330669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.343219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.343234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.356655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.356670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.369819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.369834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.383012] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.383026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.396307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.396321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.409777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.409795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.422855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.422870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.950 [2024-10-12 21:57:29.435599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.950 [2024-10-12 21:57:29.435614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.448805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.448820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.461978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.461993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.475171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.475186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.488646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.488662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.501757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.501772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.514507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.514522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.527477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.527492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.540887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.540902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.554346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.554361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.567261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.567278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.580064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.580080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.592364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 
[2024-10-12 21:57:29.592380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.605792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.605807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.619146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.619162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 19254.00 IOPS, 150.42 MiB/s [2024-10-12T19:57:29.700Z] [2024-10-12 21:57:29.632573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.632589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.645871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.645887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.659611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.659630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.672341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.672357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.685533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 [2024-10-12 21:57:29.685549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.211 [2024-10-12 21:57:29.699428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.211 
[2024-10-12 21:57:29.699444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.711812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.711828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.725017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.725033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.737768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.737784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.751133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.751149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.764440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.764456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.776795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.776811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.789463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.789477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.802754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.802770] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.815681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.815697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.828303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.828319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.841544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.841560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.854010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.854026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.866476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.866491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.879389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.472 [2024-10-12 21:57:29.879404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.472 [2024-10-12 21:57:29.892685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.473 [2024-10-12 21:57:29.892702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.473 [2024-10-12 21:57:29.905269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.473 [2024-10-12 21:57:29.905289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:11.473 [2024-10-12 21:57:29.918177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.473 [2024-10-12 21:57:29.918193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.473 [2024-10-12 21:57:29.931513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.473 [2024-10-12 21:57:29.931529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.473 [2024-10-12 21:57:29.944773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.473 [2024-10-12 21:57:29.944789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.473 [2024-10-12 21:57:29.958095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.473 [2024-10-12 21:57:29.958114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:29.971517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.733 [2024-10-12 21:57:29.971532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:29.984442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.733 [2024-10-12 21:57:29.984457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:29.997432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.733 [2024-10-12 21:57:29.997447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:30.010427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.733 [2024-10-12 21:57:30.010443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:30.023615] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.733 [2024-10-12 21:57:30.023631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:30.036498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.733 [2024-10-12 21:57:30.036514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:30.049619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.733 [2024-10-12 21:57:30.049635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:30.062776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.733 [2024-10-12 21:57:30.062791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:30.076210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.733 [2024-10-12 21:57:30.076226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:30.089920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.733 [2024-10-12 21:57:30.089936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.733 [2024-10-12 21:57:30.102728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.734 [2024-10-12 21:57:30.102745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.734 [2024-10-12 21:57:30.115127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.734 [2024-10-12 21:57:30.115143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.734 [2024-10-12 21:57:30.128530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:11.734 [2024-10-12 21:57:30.128546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.734 [2024-10-12 21:57:30.141321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.734 [2024-10-12 21:57:30.141336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.734 [2024-10-12 21:57:30.154840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.734 [2024-10-12 21:57:30.154856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.734 [2024-10-12 21:57:30.167711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.734 [2024-10-12 21:57:30.167726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.734 [2024-10-12 21:57:30.180892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.734 [2024-10-12 21:57:30.180907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.734 [2024-10-12 21:57:30.194254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.734 [2024-10-12 21:57:30.194269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.734 [2024-10-12 21:57:30.207476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.734 [2024-10-12 21:57:30.207491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.734 [2024-10-12 21:57:30.220882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.734 [2024-10-12 21:57:30.220897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.234445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 
[2024-10-12 21:57:30.234461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.246982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.246997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.259339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.259355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.272435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.272450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.285722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.285738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.298941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.298956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.312144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.312159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.325539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.325554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.338962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.338977] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.352260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.352275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.365384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.365399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.378835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.378850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.391149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.391164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.403382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.403397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.416642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.416657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.430077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.995 [2024-10-12 21:57:30.430092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.995 [2024-10-12 21:57:30.443238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.996 [2024-10-12 21:57:30.443253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:11.996 [2024-10-12 21:57:30.456821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.996 [2024-10-12 21:57:30.456835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.996 [2024-10-12 21:57:30.469256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.996 [2024-10-12 21:57:30.469270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.996 [2024-10-12 21:57:30.481794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.996 [2024-10-12 21:57:30.481809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.256 [2024-10-12 21:57:30.494080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.256 [2024-10-12 21:57:30.494095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.256 [2024-10-12 21:57:30.506478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.256 [2024-10-12 21:57:30.506493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.256 [2024-10-12 21:57:30.520018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.256 [2024-10-12 21:57:30.520034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.256 [2024-10-12 21:57:30.533471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.256 [2024-10-12 21:57:30.533487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.256 [2024-10-12 21:57:30.546426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.256 [2024-10-12 21:57:30.546442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.256 [2024-10-12 21:57:30.559868] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:12.256 [2024-10-12 21:57:30.559883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two *ERROR* lines (subsystem.c:2128 "Requested NSID 1 already in use", nvmf_rpc.c:1517 "Unable to add namespace") repeat with advancing timestamps from 21:57:30.573381 through 21:57:30.611923 ...]
00:09:12.256 19268.00 IOPS, 150.53 MiB/s [2024-10-12T19:57:30.745Z] [2024-10-12 21:57:30.625244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:12.256 [2024-10-12 21:57:30.625259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two *ERROR* lines repeat with advancing timestamps from 21:57:30.637994 through 21:57:31.607432 ...]
00:09:13.301 [2024-10-12 21:57:31.620672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:13.301 [2024-10-12 21:57:31.620687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace
00:09:13.301 19279.20 IOPS, 150.62 MiB/s [2024-10-12T19:57:31.790Z] [2024-10-12 21:57:31.631479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:13.301 [2024-10-12 21:57:31.631494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:13.301
00:09:13.301 Latency(us)
00:09:13.301 [2024-10-12T19:57:31.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:13.301 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:13.302 Nvme1n1 : 5.01 19281.92 150.64 0.00 0.00 6632.27 2990.08 17039.36
00:09:13.302 [2024-10-12T19:57:31.791Z] ===================================================================================================================
00:09:13.302 [2024-10-12T19:57:31.791Z] Total : 19281.92 150.64 0.00 0.00 6632.27 2990.08 17039.36
00:09:13.302 [2024-10-12 21:57:31.642495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:13.302 [2024-10-12 21:57:31.642510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two *ERROR* lines repeat with advancing timestamps from 21:57:31.654503 through 21:57:31.738717 ...]
00:09:13.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3336576) - No such process
00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3336576
00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:13.302 21:57:31
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.302 delay0 00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.302 21:57:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:13.562 [2024-10-12 21:57:31.854080] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:21.705 Initializing NVMe Controllers 00:09:21.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:21.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:21.705 Initialization complete. Launching workers. 
00:09:21.705 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 8627 00:09:21.705 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 8913, failed to submit 34 00:09:21.705 success 8706, unsuccessful 207, failed 0 00:09:21.705 21:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:21.705 21:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:21.705 21:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:21.705 21:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:21.705 21:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.705 21:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:21.705 21:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.705 21:57:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.705 rmmod nvme_tcp 00:09:21.705 rmmod nvme_fabrics 00:09:21.705 rmmod nvme_keyring 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 3334388 ']' 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 3334388 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3334388 ']' 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3334388 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3334388 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3334388' 00:09:21.705 killing process with pid 3334388 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3334388 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3334388 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.705 21:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:23.092 00:09:23.092 real 0m34.566s 00:09:23.092 user 0m45.871s 00:09:23.092 sys 0m11.518s 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.092 ************************************ 00:09:23.092 END TEST nvmf_zcopy 00:09:23.092 ************************************ 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.092 ************************************ 00:09:23.092 START TEST nvmf_nmic 00:09:23.092 ************************************ 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:23.092 * Looking for test storage... 
00:09:23.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.092 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.354 21:57:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:23.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.354 --rc genhtml_branch_coverage=1 00:09:23.354 --rc genhtml_function_coverage=1 00:09:23.354 --rc genhtml_legend=1 00:09:23.354 --rc geninfo_all_blocks=1 00:09:23.354 --rc geninfo_unexecuted_blocks=1 
00:09:23.354 00:09:23.354 ' 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:23.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.354 --rc genhtml_branch_coverage=1 00:09:23.354 --rc genhtml_function_coverage=1 00:09:23.354 --rc genhtml_legend=1 00:09:23.354 --rc geninfo_all_blocks=1 00:09:23.354 --rc geninfo_unexecuted_blocks=1 00:09:23.354 00:09:23.354 ' 00:09:23.354 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:23.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.354 --rc genhtml_branch_coverage=1 00:09:23.354 --rc genhtml_function_coverage=1 00:09:23.355 --rc genhtml_legend=1 00:09:23.355 --rc geninfo_all_blocks=1 00:09:23.355 --rc geninfo_unexecuted_blocks=1 00:09:23.355 00:09:23.355 ' 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:23.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.355 --rc genhtml_branch_coverage=1 00:09:23.355 --rc genhtml_function_coverage=1 00:09:23.355 --rc genhtml_legend=1 00:09:23.355 --rc geninfo_all_blocks=1 00:09:23.355 --rc geninfo_unexecuted_blocks=1 00:09:23.355 00:09:23.355 ' 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.355 21:57:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:23.355 
21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:23.355 21:57:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.499 21:57:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 
0x159b)' 00:09:31.499 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:31.499 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
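The device-discovery records above (`nvmf/common.sh@407`) map each detected PCI function to its kernel net device by globbing sysfs. A minimal standalone sketch of that mapping follows; the function signature and the base-directory parameter are illustrative additions (the harness hard-codes `/sys/bus/pci/devices`), and the PCI address in the example call is taken from the trace:

```shell
#!/usr/bin/env bash
# Sketch of the sysfs glob the trace uses to find net devices behind a PCI
# function. Each entry under <pci>/net/ names one kernel network interface.
pci_to_netdevs() {
    local base=$1 pci=$2 devs=()
    for d in "$base/$pci/net/"*; do
        # Guard against an unmatched glob (device absent or no net children).
        [[ -e $d ]] && devs+=("${d##*/}")   # strip path, keep interface name
    done
    echo "${devs[@]}"
}

# On the test rig this prints the cvl_0_* names seen in the log; on a machine
# without that NIC it prints nothing.
pci_to_netdevs /sys/bus/pci/devices 0000:4b:00.0
```

This matches the `pci_net_devs=("${pci_net_devs[@]##*/}")` step in the trace, which performs the same suffix extraction on the whole array at once.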
00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.499 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:31.499 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:31.500 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.500 21:57:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.500 
21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.500 21:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:09:31.500 00:09:31.500 --- 10.0.0.2 ping statistics --- 00:09:31.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.500 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:09:31.500 00:09:31.500 --- 10.0.0.1 ping statistics --- 00:09:31.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.500 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=3343496 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 3343496 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3343496 ']' 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.500 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.500 [2024-10-12 21:57:49.167012] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:31.500 [2024-10-12 21:57:49.167077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.500 [2024-10-12 21:57:49.254054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.500 [2024-10-12 21:57:49.304089] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.500 [2024-10-12 21:57:49.304148] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
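The `nvmf_tcp_init` records earlier in the trace (`nvmf/common.sh@271`–`@284`) build the target-side network namespace that `nvmf_tgt` is then launched into via `ip netns exec`. As a dry-run aid, the sketch below regenerates that command sequence from the names and addresses visible in the log; it only prints the commands (the real steps need root), and the generator function itself is an illustrative addition, not part of the harness:

```shell
#!/usr/bin/env bash
# Dry-run generator for the namespace plumbing recorded in the trace:
# the target interface moves into a netns, each side gets a 10.0.0.x/24
# address, and loopback is brought up inside the namespace.
build_netns_cmds() {
    local ns=$1 tgt_if=$2 ini_if=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
EOF
}

# Names and addresses as they appear in the log records above.
build_netns_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The two `ping -c 1` records in the trace then verify reachability in both directions across this split before the target application starts.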
00:09:31.500 [2024-10-12 21:57:49.304157] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.500 [2024-10-12 21:57:49.304164] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.500 [2024-10-12 21:57:49.304170] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.500 [2024-10-12 21:57:49.304270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.500 [2024-10-12 21:57:49.304426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.500 [2024-10-12 21:57:49.304583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.500 [2024-10-12 21:57:49.304583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.762 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.762 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:31.762 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:31.762 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.762 21:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.762 [2024-10-12 21:57:50.046350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.762 
21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.762 Malloc0 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.762 [2024-10-12 21:57:50.113140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:31.762 test case1: single bdev can't be used in multiple subsystems 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.762 [2024-10-12 21:57:50.148987] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:31.762 [2024-10-12 
21:57:50.149015] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:31.762 [2024-10-12 21:57:50.149023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.762 request: 00:09:31.762 { 00:09:31.762 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:31.762 "namespace": { 00:09:31.762 "bdev_name": "Malloc0", 00:09:31.762 "no_auto_visible": false 00:09:31.762 }, 00:09:31.762 "method": "nvmf_subsystem_add_ns", 00:09:31.762 "req_id": 1 00:09:31.762 } 00:09:31.762 Got JSON-RPC error response 00:09:31.762 response: 00:09:31.762 { 00:09:31.762 "code": -32602, 00:09:31.762 "message": "Invalid parameters" 00:09:31.762 } 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:31.762 Adding namespace failed - expected result. 
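The JSON-RPC error above is the intended outcome: `nmic.sh` sets `nmic_status=0`, lets the second `nvmf_subsystem_add_ns` fail, and then asserts the status is nonzero. A minimal sketch of that expected-failure pattern follows; the wrapper name is an illustrative addition, and `false` stands in for the real `rpc_cmd` invocation:

```shell
#!/usr/bin/env bash
# Expected-failure check, mirroring the nmic_status handling in the trace:
# the wrapped command MUST fail, otherwise the test itself fails.
expect_failure() {
    if "$@"; then
        echo "unexpected success" >&2
        return 1
    fi
    echo "failed as expected"
}

# Stand-in for: rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
expect_failure false
```

This inversion is why the log prints "Adding namespace failed - expected result." immediately after the `-32602 Invalid parameters` response.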
00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:31.762 test case2: host connect to nvmf target in multiple paths 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.762 [2024-10-12 21:57:50.161190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.762 21:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.676 21:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:35.059 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:35.059 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:35.059 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:35.059 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:35.059 21:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
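After the two `nvme connect` records, `waitforserial` polls `lsblk` until a block device with the expected serial appears (`autotest_common.sh@1206`–`@1208`). The sketch below reproduces the retry loop with an injectable probe so it can run anywhere; the function and its parameters are illustrative additions, and the harness itself uses 15 retries with a 2-second sleep:

```shell
#!/usr/bin/env bash
# Bounded retry loop, as in waitforserial: re-run a probe until it succeeds
# or the retry budget is exhausted.
wait_for_device() {
    local probe=$1 retries=${2:-15} i=0
    while (( i++ < retries )); do
        if $probe; then
            return 0            # device visible
        fi
        sleep 1                 # harness sleeps 2s between probes
    done
    return 1                    # gave up
}

# Real probe in the trace: lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
wait_for_device true 3 && echo connected
```

Only once this loop returns success does the harness hand `/dev/nvme0n1` to the fio wrapper shown next.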
00:09:36.999 21:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:36.999 21:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:36.999 21:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.999 21:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:36.999 21:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.999 21:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:36.999 21:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:36.999 [global] 00:09:36.999 thread=1 00:09:36.999 invalidate=1 00:09:36.999 rw=write 00:09:36.999 time_based=1 00:09:36.999 runtime=1 00:09:36.999 ioengine=libaio 00:09:36.999 direct=1 00:09:36.999 bs=4096 00:09:36.999 iodepth=1 00:09:36.999 norandommap=0 00:09:36.999 numjobs=1 00:09:36.999 00:09:36.999 verify_dump=1 00:09:36.999 verify_backlog=512 00:09:36.999 verify_state_save=0 00:09:36.999 do_verify=1 00:09:36.999 verify=crc32c-intel 00:09:36.999 [job0] 00:09:36.999 filename=/dev/nvme0n1 00:09:36.999 Could not set queue depth (nvme0n1) 00:09:37.267 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.267 fio-3.35 00:09:37.267 Starting 1 thread 00:09:38.653 00:09:38.653 job0: (groupid=0, jobs=1): err= 0: pid=3344827: Sat Oct 12 21:57:56 2024 00:09:38.653 read: IOPS=18, BW=73.4KiB/s (75.1kB/s)(76.0KiB/1036msec) 00:09:38.653 slat (nsec): min=10197, max=26878, avg=24672.21, stdev=3555.85 00:09:38.653 clat (usec): min=949, max=42945, avg=39784.61, stdev=9414.04 00:09:38.653 lat (usec): min=959, max=42970, 
avg=39809.28, stdev=9417.53 00:09:38.653 clat percentiles (usec): 00:09:38.653 | 1.00th=[ 947], 5.00th=[ 947], 10.00th=[41157], 20.00th=[41681], 00:09:38.653 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:38.653 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:09:38.653 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:38.653 | 99.99th=[42730] 00:09:38.653 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:09:38.653 slat (nsec): min=9525, max=67163, avg=29713.88, stdev=9487.00 00:09:38.653 clat (usec): min=200, max=719, avg=509.65, stdev=92.35 00:09:38.653 lat (usec): min=209, max=766, avg=539.36, stdev=96.09 00:09:38.653 clat percentiles (usec): 00:09:38.653 | 1.00th=[ 285], 5.00th=[ 338], 10.00th=[ 400], 20.00th=[ 424], 00:09:38.653 | 30.00th=[ 478], 40.00th=[ 490], 50.00th=[ 502], 60.00th=[ 529], 00:09:38.653 | 70.00th=[ 578], 80.00th=[ 603], 90.00th=[ 627], 95.00th=[ 644], 00:09:38.653 | 99.00th=[ 693], 99.50th=[ 717], 99.90th=[ 717], 99.95th=[ 717], 00:09:38.653 | 99.99th=[ 717] 00:09:38.653 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:38.653 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:38.653 lat (usec) : 250=0.56%, 500=45.76%, 750=50.09%, 1000=0.19% 00:09:38.653 lat (msec) : 50=3.39% 00:09:38.653 cpu : usr=0.58%, sys=1.64%, ctx=532, majf=0, minf=1 00:09:38.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:38.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.653 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:38.653 00:09:38.653 Run status group 0 (all jobs): 00:09:38.653 READ: bw=73.4KiB/s (75.1kB/s), 73.4KiB/s-73.4KiB/s (75.1kB/s-75.1kB/s), 
io=76.0KiB (77.8kB), run=1036-1036msec 00:09:38.653 WRITE: bw=1977KiB/s (2024kB/s), 1977KiB/s-1977KiB/s (2024kB/s-2024kB/s), io=2048KiB (2097kB), run=1036-1036msec 00:09:38.653 00:09:38.653 Disk stats (read/write): 00:09:38.653 nvme0n1: ios=65/512, merge=0/0, ticks=725/245, in_queue=970, util=97.29% 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:38.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.653 rmmod nvme_tcp 00:09:38.653 rmmod nvme_fabrics 00:09:38.653 rmmod nvme_keyring 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 3343496 ']' 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 3343496 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3343496 ']' 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3343496 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.653 21:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3343496 00:09:38.653 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.653 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.654 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3343496' 00:09:38.654 killing process with pid 3343496 00:09:38.654 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3343496 00:09:38.654 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3343496 00:09:38.915 21:57:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.915 21:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.829 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:40.829 00:09:40.829 real 0m17.857s 00:09:40.829 user 0m51.395s 00:09:40.829 sys 0m6.583s 00:09:40.829 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.829 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:40.829 ************************************ 00:09:40.829 END TEST nvmf_nmic 00:09:40.829 ************************************ 00:09:40.829 21:57:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:40.829 21:57:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:40.829 21:57:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.829 21:57:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.091 ************************************ 00:09:41.091 START TEST nvmf_fio_target 00:09:41.091 ************************************ 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:41.091 * Looking for test storage... 00:09:41.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.091 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:41.092 21:57:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.092 --rc genhtml_branch_coverage=1 00:09:41.092 --rc genhtml_function_coverage=1 00:09:41.092 --rc genhtml_legend=1 00:09:41.092 --rc geninfo_all_blocks=1 00:09:41.092 --rc geninfo_unexecuted_blocks=1 00:09:41.092 00:09:41.092 ' 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.092 --rc genhtml_branch_coverage=1 00:09:41.092 --rc genhtml_function_coverage=1 00:09:41.092 --rc genhtml_legend=1 00:09:41.092 --rc geninfo_all_blocks=1 00:09:41.092 --rc geninfo_unexecuted_blocks=1 00:09:41.092 00:09:41.092 ' 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.092 --rc genhtml_branch_coverage=1 00:09:41.092 --rc genhtml_function_coverage=1 00:09:41.092 --rc genhtml_legend=1 00:09:41.092 --rc geninfo_all_blocks=1 00:09:41.092 --rc geninfo_unexecuted_blocks=1 00:09:41.092 00:09:41.092 ' 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:09:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.092 --rc genhtml_branch_coverage=1 00:09:41.092 --rc genhtml_function_coverage=1 00:09:41.092 --rc genhtml_legend=1 00:09:41.092 --rc geninfo_all_blocks=1 00:09:41.092 --rc geninfo_unexecuted_blocks=1 00:09:41.092 00:09:41.092 ' 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.092 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.354 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.354 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:41.354 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:41.354 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:41.354 21:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.501 21:58:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.501 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:49.502 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:49.502 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:49.502 21:58:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:49.502 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:49.502 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.502 21:58:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.502 21:58:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.502 21:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:49.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:09:49.502 00:09:49.502 --- 10.0.0.2 ping statistics --- 00:09:49.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.502 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:09:49.502 00:09:49.502 --- 10.0.0.1 ping statistics --- 00:09:49.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.502 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 
00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=3349475 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 3349475 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3349475 ']' 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.502 [2024-10-12 21:58:07.163261] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:49.502 [2024-10-12 21:58:07.163331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.502 [2024-10-12 21:58:07.251217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.502 [2024-10-12 21:58:07.299253] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.502 [2024-10-12 21:58:07.299305] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.502 [2024-10-12 21:58:07.299312] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.502 [2024-10-12 21:58:07.299319] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.502 [2024-10-12 21:58:07.299326] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:49.502 [2024-10-12 21:58:07.299475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.502 [2024-10-12 21:58:07.299630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.502 [2024-10-12 21:58:07.299787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.502 [2024-10-12 21:58:07.299789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:49.502 21:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.763 21:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.763 21:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:49.763 [2024-10-12 21:58:08.192087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.764 21:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.025 21:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:50.025 21:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.285 21:58:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:50.285 21:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.546 21:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:50.546 21:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.807 21:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:50.807 21:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:50.807 21:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.069 21:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:51.069 21:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.331 21:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:51.331 21:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.591 21:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:51.591 21:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:51.852 21:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:51.852 21:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:51.852 21:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.113 21:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:52.113 21:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.374 21:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.374 [2024-10-12 21:58:10.778434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.374 21:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:52.635 21:58:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:52.896 21:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:54.280 21:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:54.280 21:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:54.280 21:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.280 21:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:54.280 21:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:54.280 21:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:56.824 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:56.824 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:56.824 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.824 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:56.824 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.824 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:56.824 21:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:56.824 [global] 00:09:56.824 thread=1 00:09:56.824 invalidate=1 00:09:56.824 rw=write 00:09:56.825 time_based=1 00:09:56.825 runtime=1 00:09:56.825 ioengine=libaio 00:09:56.825 direct=1 00:09:56.825 bs=4096 00:09:56.825 iodepth=1 00:09:56.825 norandommap=0 00:09:56.825 numjobs=1 00:09:56.825 00:09:56.825 
verify_dump=1 00:09:56.825 verify_backlog=512 00:09:56.825 verify_state_save=0 00:09:56.825 do_verify=1 00:09:56.825 verify=crc32c-intel 00:09:56.825 [job0] 00:09:56.825 filename=/dev/nvme0n1 00:09:56.825 [job1] 00:09:56.825 filename=/dev/nvme0n2 00:09:56.825 [job2] 00:09:56.825 filename=/dev/nvme0n3 00:09:56.825 [job3] 00:09:56.825 filename=/dev/nvme0n4 00:09:56.825 Could not set queue depth (nvme0n1) 00:09:56.825 Could not set queue depth (nvme0n2) 00:09:56.825 Could not set queue depth (nvme0n3) 00:09:56.825 Could not set queue depth (nvme0n4) 00:09:56.825 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.825 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.825 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.825 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:56.825 fio-3.35 00:09:56.825 Starting 4 threads 00:09:58.239 00:09:58.239 job0: (groupid=0, jobs=1): err= 0: pid=3351307: Sat Oct 12 21:58:16 2024 00:09:58.239 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:58.239 slat (nsec): min=6706, max=58383, avg=26430.87, stdev=2479.05 00:09:58.239 clat (usec): min=701, max=2012, avg=955.07, stdev=73.63 00:09:58.239 lat (usec): min=708, max=2038, avg=981.51, stdev=73.86 00:09:58.239 clat percentiles (usec): 00:09:58.239 | 1.00th=[ 791], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 914], 00:09:58.239 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 955], 60.00th=[ 971], 00:09:58.239 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1020], 95.00th=[ 1037], 00:09:58.239 | 99.00th=[ 1090], 99.50th=[ 1106], 99.90th=[ 2008], 99.95th=[ 2008], 00:09:58.239 | 99.99th=[ 2008] 00:09:58.239 write: IOPS=824, BW=3297KiB/s (3376kB/s)(3300KiB/1001msec); 0 zone resets 00:09:58.239 slat (nsec): min=9098, max=52254, avg=29330.30, 
stdev=9894.41 00:09:58.239 clat (usec): min=216, max=834, avg=561.80, stdev=117.98 00:09:58.239 lat (usec): min=226, max=868, avg=591.13, stdev=123.58 00:09:58.239 clat percentiles (usec): 00:09:58.239 | 1.00th=[ 269], 5.00th=[ 343], 10.00th=[ 388], 20.00th=[ 461], 00:09:58.239 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 603], 00:09:58.239 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 734], 00:09:58.239 | 99.00th=[ 799], 99.50th=[ 816], 99.90th=[ 832], 99.95th=[ 832], 00:09:58.239 | 99.99th=[ 832] 00:09:58.239 bw ( KiB/s): min= 4096, max= 4096, per=39.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:58.239 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:58.239 lat (usec) : 250=0.45%, 500=16.75%, 750=42.71%, 1000=33.96% 00:09:58.239 lat (msec) : 2=6.06%, 4=0.07% 00:09:58.239 cpu : usr=4.20%, sys=3.60%, ctx=1337, majf=0, minf=1 00:09:58.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.239 issued rwts: total=512,825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.239 job1: (groupid=0, jobs=1): err= 0: pid=3351328: Sat Oct 12 21:58:16 2024 00:09:58.239 read: IOPS=97, BW=390KiB/s (400kB/s)(392KiB/1004msec) 00:09:58.239 slat (nsec): min=24455, max=61053, avg=25861.61, stdev=5130.96 00:09:58.239 clat (usec): min=712, max=42836, avg=6955.32, stdev=14406.93 00:09:58.239 lat (usec): min=737, max=42861, avg=6981.18, stdev=14407.60 00:09:58.239 clat percentiles (usec): 00:09:58.239 | 1.00th=[ 709], 5.00th=[ 898], 10.00th=[ 938], 20.00th=[ 988], 00:09:58.239 | 30.00th=[ 1045], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1172], 00:09:58.239 | 70.00th=[ 1237], 80.00th=[ 1336], 90.00th=[42206], 95.00th=[42206], 00:09:58.239 | 99.00th=[42730], 99.50th=[42730], 
99.90th=[42730], 99.95th=[42730], 00:09:58.239 | 99.99th=[42730] 00:09:58.239 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:58.239 slat (nsec): min=9591, max=64274, avg=31144.20, stdev=7579.17 00:09:58.239 clat (usec): min=135, max=1048, avg=583.71, stdev=147.46 00:09:58.239 lat (usec): min=168, max=1080, avg=614.86, stdev=148.48 00:09:58.239 clat percentiles (usec): 00:09:58.239 | 1.00th=[ 265], 5.00th=[ 334], 10.00th=[ 388], 20.00th=[ 457], 00:09:58.239 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 627], 00:09:58.239 | 70.00th=[ 660], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 816], 00:09:58.239 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 1045], 99.95th=[ 1045], 00:09:58.239 | 99.99th=[ 1045] 00:09:58.239 bw ( KiB/s): min= 4096, max= 4096, per=39.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:58.239 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:58.240 lat (usec) : 250=0.66%, 500=22.79%, 750=48.85%, 1000=15.08% 00:09:58.240 lat (msec) : 2=10.33%, 50=2.30% 00:09:58.240 cpu : usr=1.00%, sys=1.69%, ctx=610, majf=0, minf=1 00:09:58.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.240 issued rwts: total=98,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.240 job2: (groupid=0, jobs=1): err= 0: pid=3351350: Sat Oct 12 21:58:16 2024 00:09:58.240 read: IOPS=16, BW=67.7KiB/s (69.4kB/s)(68.0KiB/1004msec) 00:09:58.240 slat (nsec): min=20122, max=26321, avg=25665.65, stdev=1435.27 00:09:58.240 clat (usec): min=1169, max=43017, avg=39660.27, stdev=9922.64 00:09:58.240 lat (usec): min=1195, max=43043, avg=39685.94, stdev=9922.57 00:09:58.240 clat percentiles (usec): 00:09:58.240 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41681], 
20.00th=[41681], 00:09:58.240 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:58.240 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:09:58.240 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:58.240 | 99.99th=[43254] 00:09:58.240 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:58.240 slat (nsec): min=10414, max=54221, avg=32916.71, stdev=7505.95 00:09:58.240 clat (usec): min=225, max=934, avg=602.30, stdev=125.96 00:09:58.240 lat (usec): min=237, max=968, avg=635.21, stdev=127.63 00:09:58.240 clat percentiles (usec): 00:09:58.240 | 1.00th=[ 310], 5.00th=[ 371], 10.00th=[ 441], 20.00th=[ 498], 00:09:58.240 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:09:58.240 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 799], 00:09:58.240 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 938], 99.95th=[ 938], 00:09:58.240 | 99.99th=[ 938] 00:09:58.240 bw ( KiB/s): min= 4096, max= 4096, per=39.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:58.240 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:58.240 lat (usec) : 250=0.38%, 500=19.66%, 750=65.22%, 1000=11.53% 00:09:58.240 lat (msec) : 2=0.19%, 50=3.02% 00:09:58.240 cpu : usr=0.40%, sys=1.99%, ctx=531, majf=0, minf=1 00:09:58.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.240 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.240 job3: (groupid=0, jobs=1): err= 0: pid=3351356: Sat Oct 12 21:58:16 2024 00:09:58.240 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:58.240 slat (nsec): min=24765, max=43456, avg=25594.90, stdev=1682.02 00:09:58.240 clat (usec): 
min=759, max=1195, avg=984.02, stdev=79.96 00:09:58.240 lat (usec): min=784, max=1220, avg=1009.62, stdev=79.90 00:09:58.240 clat percentiles (usec): 00:09:58.240 | 1.00th=[ 775], 5.00th=[ 832], 10.00th=[ 865], 20.00th=[ 922], 00:09:58.240 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:09:58.240 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1090], 00:09:58.240 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:09:58.240 | 99.99th=[ 1188] 00:09:58.240 write: IOPS=754, BW=3017KiB/s (3089kB/s)(3020KiB/1001msec); 0 zone resets 00:09:58.240 slat (nsec): min=9567, max=66410, avg=28789.73, stdev=9475.01 00:09:58.240 clat (usec): min=228, max=830, avg=599.04, stdev=106.35 00:09:58.240 lat (usec): min=239, max=869, avg=627.83, stdev=110.94 00:09:58.240 clat percentiles (usec): 00:09:58.240 | 1.00th=[ 347], 5.00th=[ 379], 10.00th=[ 449], 20.00th=[ 519], 00:09:58.240 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 635], 00:09:58.240 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 750], 00:09:58.240 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 832], 99.95th=[ 832], 00:09:58.240 | 99.99th=[ 832] 00:09:58.240 bw ( KiB/s): min= 4096, max= 4096, per=39.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:58.240 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:58.240 lat (usec) : 250=0.08%, 500=11.05%, 750=45.38%, 1000=24.39% 00:09:58.240 lat (msec) : 2=19.10% 00:09:58.240 cpu : usr=1.40%, sys=4.00%, ctx=1267, majf=0, minf=1 00:09:58.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.240 issued rwts: total=512,755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.240 00:09:58.240 Run status group 0 (all jobs): 
00:09:58.240 READ: bw=4538KiB/s (4647kB/s), 67.7KiB/s-2046KiB/s (69.4kB/s-2095kB/s), io=4556KiB (4665kB), run=1001-1004msec 00:09:58.240 WRITE: bw=10.1MiB/s (10.6MB/s), 2040KiB/s-3297KiB/s (2089kB/s-3376kB/s), io=10.2MiB (10.7MB), run=1001-1004msec 00:09:58.240 00:09:58.240 Disk stats (read/write): 00:09:58.240 nvme0n1: ios=562/546, merge=0/0, ticks=853/228, in_queue=1081, util=91.38% 00:09:58.240 nvme0n2: ios=116/512, merge=0/0, ticks=554/276, in_queue=830, util=87.54% 00:09:58.240 nvme0n3: ios=34/512, merge=0/0, ticks=1380/282, in_queue=1662, util=96.61% 00:09:58.240 nvme0n4: ios=499/512, merge=0/0, ticks=491/293, in_queue=784, util=89.50% 00:09:58.240 21:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:58.240 [global] 00:09:58.240 thread=1 00:09:58.240 invalidate=1 00:09:58.240 rw=randwrite 00:09:58.240 time_based=1 00:09:58.240 runtime=1 00:09:58.240 ioengine=libaio 00:09:58.240 direct=1 00:09:58.240 bs=4096 00:09:58.240 iodepth=1 00:09:58.240 norandommap=0 00:09:58.240 numjobs=1 00:09:58.240 00:09:58.240 verify_dump=1 00:09:58.240 verify_backlog=512 00:09:58.240 verify_state_save=0 00:09:58.240 do_verify=1 00:09:58.240 verify=crc32c-intel 00:09:58.240 [job0] 00:09:58.240 filename=/dev/nvme0n1 00:09:58.240 [job1] 00:09:58.240 filename=/dev/nvme0n2 00:09:58.240 [job2] 00:09:58.240 filename=/dev/nvme0n3 00:09:58.240 [job3] 00:09:58.240 filename=/dev/nvme0n4 00:09:58.240 Could not set queue depth (nvme0n1) 00:09:58.240 Could not set queue depth (nvme0n2) 00:09:58.240 Could not set queue depth (nvme0n3) 00:09:58.240 Could not set queue depth (nvme0n4) 00:09:58.503 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.503 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.503 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.503 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.503 fio-3.35 00:09:58.503 Starting 4 threads 00:09:59.899 00:09:59.899 job0: (groupid=0, jobs=1): err= 0: pid=3351768: Sat Oct 12 21:58:18 2024 00:09:59.899 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:59.899 slat (nsec): min=7127, max=50694, avg=26534.59, stdev=2700.35 00:09:59.899 clat (usec): min=562, max=1103, avg=951.63, stdev=60.13 00:09:59.899 lat (usec): min=588, max=1129, avg=978.17, stdev=59.77 00:09:59.899 clat percentiles (usec): 00:09:59.899 | 1.00th=[ 734], 5.00th=[ 857], 10.00th=[ 889], 20.00th=[ 930], 00:09:59.899 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:09:59.899 | 70.00th=[ 979], 80.00th=[ 988], 90.00th=[ 1004], 95.00th=[ 1029], 00:09:59.899 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1106], 99.95th=[ 1106], 00:09:59.899 | 99.99th=[ 1106] 00:09:59.899 write: IOPS=800, BW=3201KiB/s (3278kB/s)(3204KiB/1001msec); 0 zone resets 00:09:59.899 slat (nsec): min=8920, max=64964, avg=30412.81, stdev=8745.88 00:09:59.899 clat (usec): min=236, max=879, avg=579.70, stdev=113.37 00:09:59.899 lat (usec): min=252, max=912, avg=610.11, stdev=116.65 00:09:59.899 clat percentiles (usec): 00:09:59.899 | 1.00th=[ 289], 5.00th=[ 367], 10.00th=[ 429], 20.00th=[ 490], 00:09:59.899 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 611], 00:09:59.899 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 750], 00:09:59.899 | 99.00th=[ 791], 99.50th=[ 824], 99.90th=[ 881], 99.95th=[ 881], 00:09:59.899 | 99.99th=[ 881] 00:09:59.899 bw ( KiB/s): min= 4087, max= 4087, per=44.99%, avg=4087.00, stdev= 0.00, samples=1 00:09:59.899 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:59.899 lat (usec) : 250=0.15%, 500=13.25%, 750=45.24%, 1000=36.63% 00:09:59.899 lat (msec) : 
2=4.72% 00:09:59.899 cpu : usr=3.00%, sys=5.00%, ctx=1314, majf=0, minf=1 00:09:59.899 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.899 issued rwts: total=512,801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.899 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.899 job1: (groupid=0, jobs=1): err= 0: pid=3351783: Sat Oct 12 21:58:18 2024 00:09:59.899 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1029msec) 00:09:59.899 slat (nsec): min=26131, max=27008, avg=26458.47, stdev=223.58 00:09:59.899 clat (usec): min=40993, max=42977, avg=41984.70, stdev=367.22 00:09:59.899 lat (usec): min=41020, max=43004, avg=42011.16, stdev=367.31 00:09:59.899 clat percentiles (usec): 00:09:59.899 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:09:59.899 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:59.899 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:09:59.899 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:59.899 | 99.99th=[42730] 00:09:59.899 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:09:59.899 slat (nsec): min=8929, max=52889, avg=29933.43, stdev=8831.03 00:09:59.899 clat (usec): min=171, max=947, avg=575.54, stdev=145.92 00:09:59.899 lat (usec): min=182, max=998, avg=605.47, stdev=149.68 00:09:59.899 clat percentiles (usec): 00:09:59.899 | 1.00th=[ 253], 5.00th=[ 310], 10.00th=[ 371], 20.00th=[ 449], 00:09:59.899 | 30.00th=[ 506], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 619], 00:09:59.899 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 791], 00:09:59.899 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 947], 99.95th=[ 947], 00:09:59.899 | 99.99th=[ 947] 00:09:59.899 bw ( KiB/s): min= 4096, max= 4096, per=45.09%, 
avg=4096.00, stdev= 0.00, samples=1 00:09:59.899 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.899 lat (usec) : 250=0.95%, 500=27.60%, 750=58.60%, 1000=9.64% 00:09:59.899 lat (msec) : 50=3.21% 00:09:59.899 cpu : usr=0.78%, sys=2.24%, ctx=529, majf=0, minf=2 00:09:59.899 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.899 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.899 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.899 job2: (groupid=0, jobs=1): err= 0: pid=3351804: Sat Oct 12 21:58:18 2024 00:09:59.899 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:09:59.899 slat (nsec): min=25044, max=25735, avg=25359.24, stdev=187.77 00:09:59.899 clat (usec): min=979, max=43001, avg=39648.49, stdev=9983.53 00:09:59.899 lat (usec): min=1005, max=43026, avg=39673.85, stdev=9983.43 00:09:59.899 clat percentiles (usec): 00:09:59.899 | 1.00th=[ 979], 5.00th=[ 979], 10.00th=[41157], 20.00th=[41681], 00:09:59.899 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:59.899 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:09:59.899 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:59.899 | 99.99th=[43254] 00:09:59.899 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:09:59.899 slat (nsec): min=9494, max=49501, avg=28494.21, stdev=8944.92 00:09:59.899 clat (usec): min=261, max=874, avg=615.68, stdev=103.75 00:09:59.899 lat (usec): min=272, max=906, avg=644.18, stdev=107.76 00:09:59.899 clat percentiles (usec): 00:09:59.899 | 1.00th=[ 355], 5.00th=[ 441], 10.00th=[ 469], 20.00th=[ 537], 00:09:59.899 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 660], 00:09:59.899 | 70.00th=[ 685], 80.00th=[ 
709], 90.00th=[ 742], 95.00th=[ 766], 00:09:59.899 | 99.00th=[ 824], 99.50th=[ 840], 99.90th=[ 873], 99.95th=[ 873], 00:09:59.899 | 99.99th=[ 873] 00:09:59.899 bw ( KiB/s): min= 4096, max= 4096, per=45.09%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.899 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.899 lat (usec) : 500=14.74%, 750=74.67%, 1000=7.56% 00:09:59.899 lat (msec) : 50=3.02% 00:09:59.899 cpu : usr=0.40%, sys=1.79%, ctx=529, majf=0, minf=1 00:09:59.899 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.899 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.899 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.899 job3: (groupid=0, jobs=1): err= 0: pid=3351811: Sat Oct 12 21:58:18 2024 00:09:59.899 read: IOPS=15, BW=63.8KiB/s (65.3kB/s)(64.0KiB/1003msec) 00:09:59.899 slat (nsec): min=24879, max=25513, avg=25090.75, stdev=180.45 00:09:59.899 clat (usec): min=40995, max=43008, avg=42028.36, stdev=453.61 00:09:59.899 lat (usec): min=41020, max=43033, avg=42053.45, stdev=453.54 00:09:59.899 clat percentiles (usec): 00:09:59.899 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:09:59.899 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:59.899 | 70.00th=[42206], 80.00th=[42206], 90.00th=[43254], 95.00th=[43254], 00:09:59.899 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:59.899 | 99.99th=[43254] 00:09:59.899 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:59.899 slat (nsec): min=9507, max=66972, avg=30009.53, stdev=7128.39 00:09:59.899 clat (usec): min=208, max=1089, avg=605.22, stdev=134.86 00:09:59.899 lat (usec): min=239, max=1121, avg=635.23, stdev=136.33 00:09:59.899 clat percentiles 
(usec): 00:09:59.899 | 1.00th=[ 314], 5.00th=[ 363], 10.00th=[ 429], 20.00th=[ 486], 00:09:59.899 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 652], 00:09:59.899 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 807], 00:09:59.899 | 99.00th=[ 906], 99.50th=[ 930], 99.90th=[ 1090], 99.95th=[ 1090], 00:09:59.899 | 99.99th=[ 1090] 00:09:59.899 bw ( KiB/s): min= 4096, max= 4096, per=45.09%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.899 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.899 lat (usec) : 250=0.19%, 500=21.97%, 750=61.55%, 1000=12.88% 00:09:59.899 lat (msec) : 2=0.38%, 50=3.03% 00:09:59.899 cpu : usr=1.20%, sys=1.20%, ctx=528, majf=0, minf=1 00:09:59.899 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.899 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.899 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.899 00:09:59.899 Run status group 0 (all jobs): 00:09:59.900 READ: bw=2185KiB/s (2237kB/s), 63.8KiB/s-2046KiB/s (65.3kB/s-2095kB/s), io=2248KiB (2302kB), run=1001-1029msec 00:09:59.900 WRITE: bw=9085KiB/s (9303kB/s), 1990KiB/s-3201KiB/s (2038kB/s-3278kB/s), io=9348KiB (9572kB), run=1001-1029msec 00:09:59.900 00:09:59.900 Disk stats (read/write): 00:09:59.900 nvme0n1: ios=549/525, merge=0/0, ticks=571/229, in_queue=800, util=89.98% 00:09:59.900 nvme0n2: ios=50/512, merge=0/0, ticks=644/234, in_queue=878, util=96.11% 00:09:59.900 nvme0n3: ios=12/512, merge=0/0, ticks=462/299, in_queue=761, util=88.36% 00:09:59.900 nvme0n4: ios=11/512, merge=0/0, ticks=462/297, in_queue=759, util=89.50% 00:09:59.900 21:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 
-t write -r 1 -v 00:09:59.900 [global] 00:09:59.900 thread=1 00:09:59.900 invalidate=1 00:09:59.900 rw=write 00:09:59.900 time_based=1 00:09:59.900 runtime=1 00:09:59.900 ioengine=libaio 00:09:59.900 direct=1 00:09:59.900 bs=4096 00:09:59.900 iodepth=128 00:09:59.900 norandommap=0 00:09:59.900 numjobs=1 00:09:59.900 00:09:59.900 verify_dump=1 00:09:59.900 verify_backlog=512 00:09:59.900 verify_state_save=0 00:09:59.900 do_verify=1 00:09:59.900 verify=crc32c-intel 00:09:59.900 [job0] 00:09:59.900 filename=/dev/nvme0n1 00:09:59.900 [job1] 00:09:59.900 filename=/dev/nvme0n2 00:09:59.900 [job2] 00:09:59.900 filename=/dev/nvme0n3 00:09:59.900 [job3] 00:09:59.900 filename=/dev/nvme0n4 00:09:59.900 Could not set queue depth (nvme0n1) 00:09:59.900 Could not set queue depth (nvme0n2) 00:09:59.900 Could not set queue depth (nvme0n3) 00:09:59.900 Could not set queue depth (nvme0n4) 00:10:00.221 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.221 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.221 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.221 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.221 fio-3.35 00:10:00.221 Starting 4 threads 00:10:01.690 00:10:01.690 job0: (groupid=0, jobs=1): err= 0: pid=3352317: Sat Oct 12 21:58:19 2024 00:10:01.690 read: IOPS=5718, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1007msec) 00:10:01.690 slat (nsec): min=921, max=9371.1k, avg=78980.19, stdev=598955.10 00:10:01.690 clat (usec): min=1788, max=21432, avg=9975.26, stdev=2883.78 00:10:01.690 lat (usec): min=3453, max=23033, avg=10054.24, stdev=2931.73 00:10:01.690 clat percentiles (usec): 00:10:01.690 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 6456], 20.00th=[ 7242], 00:10:01.690 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9241], 
60.00th=[10028], 00:10:01.690 | 70.00th=[11469], 80.00th=[12911], 90.00th=[14091], 95.00th=[15401], 00:10:01.690 | 99.00th=[16581], 99.50th=[17957], 99.90th=[21103], 99.95th=[21103], 00:10:01.690 | 99.99th=[21365] 00:10:01.690 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:10:01.690 slat (nsec): min=1632, max=10813k, avg=84087.10, stdev=503948.32 00:10:01.690 clat (usec): min=1138, max=34956, avg=11439.51, stdev=6891.88 00:10:01.690 lat (usec): min=1147, max=34960, avg=11523.60, stdev=6942.11 00:10:01.690 clat percentiles (usec): 00:10:01.690 | 1.00th=[ 3523], 5.00th=[ 4555], 10.00th=[ 5407], 20.00th=[ 6456], 00:10:01.690 | 30.00th=[ 7504], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:10:01.690 | 70.00th=[11600], 80.00th=[13829], 90.00th=[23987], 95.00th=[27395], 00:10:01.690 | 99.00th=[32113], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:10:01.690 | 99.99th=[34866] 00:10:01.690 bw ( KiB/s): min=24568, max=24576, per=23.69%, avg=24572.00, stdev= 5.66, samples=2 00:10:01.690 iops : min= 6142, max= 6144, avg=6143.00, stdev= 1.41, samples=2 00:10:01.690 lat (msec) : 2=0.03%, 4=1.33%, 10=59.84%, 20=30.77%, 50=8.03% 00:10:01.690 cpu : usr=3.48%, sys=7.36%, ctx=516, majf=0, minf=1 00:10:01.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:01.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.690 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.690 job1: (groupid=0, jobs=1): err= 0: pid=3352330: Sat Oct 12 21:58:19 2024 00:10:01.690 read: IOPS=7000, BW=27.3MiB/s (28.7MB/s)(27.5MiB/1005msec) 00:10:01.690 slat (nsec): min=905, max=10768k, avg=70236.65, stdev=496487.27 00:10:01.690 clat (usec): min=1446, max=48638, avg=8959.71, stdev=4418.48 00:10:01.690 lat (usec): min=1597, 
max=48644, avg=9029.95, stdev=4449.84 00:10:01.690 clat percentiles (usec): 00:10:01.690 | 1.00th=[ 3195], 5.00th=[ 4080], 10.00th=[ 5276], 20.00th=[ 6849], 00:10:01.690 | 30.00th=[ 7177], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[ 8586], 00:10:01.690 | 70.00th=[ 8979], 80.00th=[10290], 90.00th=[11600], 95.00th=[16909], 00:10:01.690 | 99.00th=[30802], 99.50th=[31851], 99.90th=[33424], 99.95th=[40633], 00:10:01.690 | 99.99th=[48497] 00:10:01.690 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:10:01.690 slat (nsec): min=1602, max=9921.8k, avg=62696.96, stdev=381484.47 00:10:01.690 clat (usec): min=494, max=61238, avg=8994.79, stdev=6569.02 00:10:01.690 lat (usec): min=673, max=61249, avg=9057.48, stdev=6612.23 00:10:01.690 clat percentiles (usec): 00:10:01.690 | 1.00th=[ 1647], 5.00th=[ 3458], 10.00th=[ 5014], 20.00th=[ 6718], 00:10:01.690 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8225], 00:10:01.690 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[12256], 95.00th=[16319], 00:10:01.690 | 99.00th=[44827], 99.50th=[52691], 99.90th=[59507], 99.95th=[61080], 00:10:01.690 | 99.99th=[61080] 00:10:01.690 bw ( KiB/s): min=28614, max=28672, per=27.62%, avg=28643.00, stdev=41.01, samples=2 00:10:01.690 iops : min= 7153, max= 7168, avg=7160.50, stdev=10.61, samples=2 00:10:01.690 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.03% 00:10:01.690 lat (msec) : 2=0.66%, 4=5.14%, 10=74.32%, 20=16.19%, 50=3.20% 00:10:01.690 lat (msec) : 100=0.39% 00:10:01.690 cpu : usr=5.38%, sys=7.17%, ctx=680, majf=0, minf=1 00:10:01.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:01.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.690 issued rwts: total=7035,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.690 job2: (groupid=0, 
jobs=1): err= 0: pid=3352349: Sat Oct 12 21:58:19 2024 00:10:01.690 read: IOPS=7031, BW=27.5MiB/s (28.8MB/s)(27.6MiB/1005msec) 00:10:01.690 slat (nsec): min=1056, max=8633.6k, avg=73301.76, stdev=550363.06 00:10:01.690 clat (usec): min=2438, max=17933, avg=9360.37, stdev=2316.50 00:10:01.690 lat (usec): min=2758, max=17944, avg=9433.67, stdev=2352.15 00:10:01.690 clat percentiles (usec): 00:10:01.690 | 1.00th=[ 4293], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7701], 00:10:01.690 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9372], 00:10:01.690 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[12518], 95.00th=[14091], 00:10:01.690 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17433], 99.95th=[17433], 00:10:01.690 | 99.99th=[17957] 00:10:01.690 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:10:01.690 slat (nsec): min=1743, max=33318k, avg=61909.55, stdev=509144.64 00:10:01.690 clat (usec): min=753, max=34713, avg=7953.60, stdev=2088.74 00:10:01.690 lat (usec): min=762, max=34762, avg=8015.51, stdev=2124.82 00:10:01.690 clat percentiles (usec): 00:10:01.690 | 1.00th=[ 2278], 5.00th=[ 3949], 10.00th=[ 5473], 20.00th=[ 6718], 00:10:01.691 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8586], 00:10:01.691 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[10159], 00:10:01.691 | 99.00th=[13698], 99.50th=[13698], 99.90th=[17433], 99.95th=[34866], 00:10:01.691 | 99.99th=[34866] 00:10:01.691 bw ( KiB/s): min=28656, max=28688, per=27.64%, avg=28672.00, stdev=22.63, samples=2 00:10:01.691 iops : min= 7164, max= 7172, avg=7168.00, stdev= 5.66, samples=2 00:10:01.691 lat (usec) : 1000=0.02% 00:10:01.691 lat (msec) : 2=0.27%, 4=2.62%, 10=79.88%, 20=17.18%, 50=0.03% 00:10:01.691 cpu : usr=3.88%, sys=7.17%, ctx=765, majf=0, minf=2 00:10:01.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:01.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.691 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.691 issued rwts: total=7067,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.691 job3: (groupid=0, jobs=1): err= 0: pid=3352357: Sat Oct 12 21:58:19 2024 00:10:01.691 read: IOPS=5563, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1002msec) 00:10:01.691 slat (nsec): min=933, max=20378k, avg=98278.46, stdev=808774.83 00:10:01.691 clat (usec): min=909, max=53733, avg=12317.76, stdev=6379.38 00:10:01.691 lat (usec): min=2977, max=53760, avg=12416.04, stdev=6446.12 00:10:01.691 clat percentiles (usec): 00:10:01.691 | 1.00th=[ 4555], 5.00th=[ 7373], 10.00th=[ 8094], 20.00th=[ 8717], 00:10:01.691 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11076], 00:10:01.691 | 70.00th=[12518], 80.00th=[14222], 90.00th=[20841], 95.00th=[24773], 00:10:01.691 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[44303], 00:10:01.691 | 99.99th=[53740] 00:10:01.691 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:01.691 slat (nsec): min=1590, max=18820k, avg=67470.46, stdev=545976.73 00:10:01.691 clat (usec): min=1234, max=49681, avg=10378.39, stdev=7205.69 00:10:01.691 lat (usec): min=1245, max=49704, avg=10445.86, stdev=7261.56 00:10:01.691 clat percentiles (usec): 00:10:01.691 | 1.00th=[ 2966], 5.00th=[ 4293], 10.00th=[ 4883], 20.00th=[ 5932], 00:10:01.691 | 30.00th=[ 7177], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[ 9110], 00:10:01.691 | 70.00th=[ 9372], 80.00th=[11338], 90.00th=[19006], 95.00th=[28705], 00:10:01.691 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39584], 99.95th=[45876], 00:10:01.691 | 99.99th=[49546] 00:10:01.691 bw ( KiB/s): min=25768, max=25768, per=24.84%, avg=25768.00, stdev= 0.00, samples=1 00:10:01.691 iops : min= 6442, max= 6442, avg=6442.00, stdev= 0.00, samples=1 00:10:01.691 lat (usec) : 1000=0.01% 00:10:01.691 lat (msec) : 2=0.08%, 4=2.09%, 10=59.20%, 20=28.46%, 
50=10.15% 00:10:01.691 lat (msec) : 100=0.02% 00:10:01.691 cpu : usr=3.90%, sys=6.09%, ctx=492, majf=0, minf=1 00:10:01.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:01.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.691 issued rwts: total=5575,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.691 00:10:01.691 Run status group 0 (all jobs): 00:10:01.691 READ: bw=98.7MiB/s (103MB/s), 21.7MiB/s-27.5MiB/s (22.8MB/s-28.8MB/s), io=99.4MiB (104MB), run=1002-1007msec 00:10:01.691 WRITE: bw=101MiB/s (106MB/s), 22.0MiB/s-27.9MiB/s (23.0MB/s-29.2MB/s), io=102MiB (107MB), run=1002-1007msec 00:10:01.691 00:10:01.691 Disk stats (read/write): 00:10:01.691 nvme0n1: ios=4263/4608, merge=0/0, ticks=43483/58702, in_queue=102185, util=96.09% 00:10:01.691 nvme0n2: ios=6184/6423, merge=0/0, ticks=31522/27214, in_queue=58736, util=88.58% 00:10:01.691 nvme0n3: ios=5668/6055, merge=0/0, ticks=51709/46761, in_queue=98470, util=98.21% 00:10:01.691 nvme0n4: ios=4608/5030, merge=0/0, ticks=43635/37107, in_queue=80742, util=88.05% 00:10:01.691 21:58:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:01.691 [global] 00:10:01.691 thread=1 00:10:01.691 invalidate=1 00:10:01.691 rw=randwrite 00:10:01.691 time_based=1 00:10:01.691 runtime=1 00:10:01.691 ioengine=libaio 00:10:01.691 direct=1 00:10:01.691 bs=4096 00:10:01.691 iodepth=128 00:10:01.691 norandommap=0 00:10:01.691 numjobs=1 00:10:01.691 00:10:01.691 verify_dump=1 00:10:01.691 verify_backlog=512 00:10:01.691 verify_state_save=0 00:10:01.691 do_verify=1 00:10:01.691 verify=crc32c-intel 00:10:01.691 [job0] 00:10:01.691 filename=/dev/nvme0n1 00:10:01.691 [job1] 00:10:01.691 
filename=/dev/nvme0n2 00:10:01.691 [job2] 00:10:01.691 filename=/dev/nvme0n3 00:10:01.691 [job3] 00:10:01.691 filename=/dev/nvme0n4 00:10:01.691 Could not set queue depth (nvme0n1) 00:10:01.691 Could not set queue depth (nvme0n2) 00:10:01.691 Could not set queue depth (nvme0n3) 00:10:01.691 Could not set queue depth (nvme0n4) 00:10:01.691 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.691 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.691 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.691 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.691 fio-3.35 00:10:01.691 Starting 4 threads 00:10:03.074 00:10:03.074 job0: (groupid=0, jobs=1): err= 0: pid=3352823: Sat Oct 12 21:58:21 2024 00:10:03.074 read: IOPS=5388, BW=21.0MiB/s (22.1MB/s)(21.1MiB/1002msec) 00:10:03.074 slat (nsec): min=889, max=6027.0k, avg=94860.23, stdev=498555.28 00:10:03.074 clat (usec): min=1155, max=22277, avg=11900.88, stdev=2944.32 00:10:03.074 lat (usec): min=6522, max=22287, avg=11995.74, stdev=2989.78 00:10:03.074 clat percentiles (usec): 00:10:03.074 | 1.00th=[ 7111], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9110], 00:10:03.074 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11469], 60.00th=[12518], 00:10:03.074 | 70.00th=[13435], 80.00th=[14353], 90.00th=[16319], 95.00th=[17433], 00:10:03.074 | 99.00th=[19006], 99.50th=[20317], 99.90th=[21365], 99.95th=[21890], 00:10:03.074 | 99.99th=[22152] 00:10:03.074 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:03.074 slat (nsec): min=1482, max=3638.5k, avg=82870.04, stdev=356379.35 00:10:03.074 clat (usec): min=6199, max=22613, avg=11104.81, stdev=3129.17 00:10:03.074 lat (usec): min=6206, max=22615, avg=11187.68, stdev=3152.11 00:10:03.075 
clat percentiles (usec): 00:10:03.075 | 1.00th=[ 6915], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 8291], 00:10:03.075 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[11338], 00:10:03.075 | 70.00th=[12780], 80.00th=[13698], 90.00th=[15401], 95.00th=[17433], 00:10:03.075 | 99.00th=[20055], 99.50th=[20579], 99.90th=[22676], 99.95th=[22676], 00:10:03.075 | 99.99th=[22676] 00:10:03.075 bw ( KiB/s): min=21136, max=23920, per=22.11%, avg=22528.00, stdev=1968.59, samples=2 00:10:03.075 iops : min= 5284, max= 5980, avg=5632.00, stdev=492.15, samples=2 00:10:03.075 lat (msec) : 2=0.01%, 10=42.54%, 20=56.62%, 50=0.82% 00:10:03.075 cpu : usr=2.90%, sys=4.30%, ctx=694, majf=0, minf=1 00:10:03.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:03.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.075 issued rwts: total=5399,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.075 job1: (groupid=0, jobs=1): err= 0: pid=3352830: Sat Oct 12 21:58:21 2024 00:10:03.075 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:03.075 slat (nsec): min=918, max=7072.4k, avg=85974.21, stdev=534065.93 00:10:03.075 clat (usec): min=2566, max=27329, avg=11990.17, stdev=5454.92 00:10:03.075 lat (usec): min=2619, max=27334, avg=12076.15, stdev=5479.07 00:10:03.075 clat percentiles (usec): 00:10:03.075 | 1.00th=[ 3884], 5.00th=[ 5407], 10.00th=[ 6587], 20.00th=[ 6980], 00:10:03.075 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[10421], 60.00th=[14484], 00:10:03.075 | 70.00th=[15926], 80.00th=[16909], 90.00th=[19006], 95.00th=[21103], 00:10:03.075 | 99.00th=[26346], 99.50th=[27395], 99.90th=[27395], 99.95th=[27395], 00:10:03.075 | 99.99th=[27395] 00:10:03.075 write: IOPS=5695, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1002msec); 0 zone resets 00:10:03.075 slat (nsec): 
min=1609, max=12686k, avg=79494.77, stdev=483746.27 00:10:03.075 clat (usec): min=751, max=38979, avg=10427.89, stdev=5236.71 00:10:03.075 lat (usec): min=1238, max=38995, avg=10507.39, stdev=5260.19 00:10:03.075 clat percentiles (usec): 00:10:03.075 | 1.00th=[ 3458], 5.00th=[ 4228], 10.00th=[ 5080], 20.00th=[ 6259], 00:10:03.075 | 30.00th=[ 7177], 40.00th=[ 8029], 50.00th=[ 9765], 60.00th=[11731], 00:10:03.075 | 70.00th=[12387], 80.00th=[13435], 90.00th=[14746], 95.00th=[16581], 00:10:03.075 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:10:03.075 | 99.99th=[39060] 00:10:03.075 bw ( KiB/s): min=20480, max=24576, per=22.11%, avg=22528.00, stdev=2896.31, samples=2 00:10:03.075 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:03.075 lat (usec) : 1000=0.01% 00:10:03.075 lat (msec) : 2=0.04%, 4=2.30%, 10=47.39%, 20=44.92%, 50=5.33% 00:10:03.075 cpu : usr=3.80%, sys=6.99%, ctx=439, majf=0, minf=1 00:10:03.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:03.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.075 issued rwts: total=5632,5707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.075 job2: (groupid=0, jobs=1): err= 0: pid=3352849: Sat Oct 12 21:58:21 2024 00:10:03.075 read: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.1MiB/1007msec) 00:10:03.075 slat (nsec): min=987, max=11323k, avg=73080.28, stdev=550571.70 00:10:03.075 clat (usec): min=3622, max=21421, avg=9374.18, stdev=2315.72 00:10:03.075 lat (usec): min=3630, max=21454, avg=9447.26, stdev=2351.44 00:10:03.075 clat percentiles (usec): 00:10:03.075 | 1.00th=[ 4113], 5.00th=[ 6652], 10.00th=[ 7111], 20.00th=[ 7701], 00:10:03.075 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9503], 00:10:03.075 | 70.00th=[10159], 80.00th=[11076], 
90.00th=[12780], 95.00th=[14091], 00:10:03.075 | 99.00th=[16188], 99.50th=[17433], 99.90th=[19530], 99.95th=[19792], 00:10:03.075 | 99.99th=[21365] 00:10:03.075 write: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec); 0 zone resets 00:10:03.075 slat (nsec): min=1595, max=10497k, avg=56308.26, stdev=403608.45 00:10:03.075 clat (usec): min=1213, max=21697, avg=7873.55, stdev=2015.11 00:10:03.075 lat (usec): min=1225, max=21720, avg=7929.86, stdev=2037.06 00:10:03.075 clat percentiles (usec): 00:10:03.075 | 1.00th=[ 3425], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 6194], 00:10:03.075 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8225], 00:10:03.075 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10421], 95.00th=[11207], 00:10:03.075 | 99.00th=[12780], 99.50th=[14091], 99.90th=[16712], 99.95th=[19530], 00:10:03.075 | 99.99th=[21627] 00:10:03.075 bw ( KiB/s): min=29128, max=31408, per=29.71%, avg=30268.00, stdev=1612.20, samples=2 00:10:03.075 iops : min= 7282, max= 7852, avg=7567.00, stdev=403.05, samples=2 00:10:03.075 lat (msec) : 2=0.06%, 4=1.42%, 10=76.76%, 20=21.74%, 50=0.02% 00:10:03.075 cpu : usr=5.07%, sys=8.55%, ctx=554, majf=0, minf=1 00:10:03.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:03.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.075 issued rwts: total=7182,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.075 job3: (groupid=0, jobs=1): err= 0: pid=3352857: Sat Oct 12 21:58:21 2024 00:10:03.075 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:10:03.075 slat (nsec): min=913, max=30462k, avg=73486.86, stdev=618746.86 00:10:03.075 clat (usec): min=1960, max=46014, avg=9844.24, stdev=4973.34 00:10:03.075 lat (usec): min=2000, max=46038, avg=9917.73, stdev=5010.93 00:10:03.075 clat percentiles 
(usec): 00:10:03.075 | 1.00th=[ 3654], 5.00th=[ 5735], 10.00th=[ 6587], 20.00th=[ 7767], 00:10:03.075 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9110], 00:10:03.075 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[13566], 95.00th=[17695], 00:10:03.075 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[43779], 00:10:03.075 | 99.99th=[45876] 00:10:03.075 write: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(25.9MiB/1005msec); 0 zone resets 00:10:03.075 slat (nsec): min=1497, max=14393k, avg=70344.48, stdev=550272.19 00:10:03.075 clat (usec): min=918, max=36337, avg=10107.62, stdev=6203.82 00:10:03.075 lat (usec): min=926, max=36339, avg=10177.96, stdev=6245.53 00:10:03.075 clat percentiles (usec): 00:10:03.075 | 1.00th=[ 1549], 5.00th=[ 3982], 10.00th=[ 4555], 20.00th=[ 5604], 00:10:03.075 | 30.00th=[ 6849], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 9110], 00:10:03.075 | 70.00th=[10421], 80.00th=[13698], 90.00th=[18220], 95.00th=[25035], 00:10:03.075 | 99.00th=[32637], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:10:03.075 | 99.99th=[36439] 00:10:03.075 bw ( KiB/s): min=23352, max=28664, per=25.53%, avg=26008.00, stdev=3756.15, samples=2 00:10:03.075 iops : min= 5838, max= 7166, avg=6502.00, stdev=939.04, samples=2 00:10:03.075 lat (usec) : 1000=0.05% 00:10:03.075 lat (msec) : 2=1.16%, 4=2.10%, 10=67.38%, 20=24.36%, 50=4.94% 00:10:03.075 cpu : usr=4.38%, sys=7.37%, ctx=472, majf=0, minf=2 00:10:03.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:03.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.075 issued rwts: total=6144,6629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.075 00:10:03.075 Run status group 0 (all jobs): 00:10:03.075 READ: bw=94.5MiB/s (99.1MB/s), 21.0MiB/s-27.9MiB/s (22.1MB/s-29.2MB/s), io=95.1MiB 
(99.8MB), run=1002-1007msec 00:10:03.075 WRITE: bw=99.5MiB/s (104MB/s), 22.0MiB/s-29.8MiB/s (23.0MB/s-31.2MB/s), io=100MiB (105MB), run=1002-1007msec 00:10:03.075 00:10:03.075 Disk stats (read/write): 00:10:03.075 nvme0n1: ios=4658/4831, merge=0/0, ticks=17732/16374, in_queue=34106, util=87.98% 00:10:03.075 nvme0n2: ios=4637/4783, merge=0/0, ticks=27037/28749, in_queue=55786, util=96.22% 00:10:03.075 nvme0n3: ios=6066/6144, merge=0/0, ticks=54849/46017, in_queue=100866, util=88.38% 00:10:03.075 nvme0n4: ios=5161/5356, merge=0/0, ticks=33226/38484, in_queue=71710, util=100.00% 00:10:03.075 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:03.075 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3352997 00:10:03.075 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:03.075 21:58:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:03.075 [global] 00:10:03.075 thread=1 00:10:03.075 invalidate=1 00:10:03.075 rw=read 00:10:03.075 time_based=1 00:10:03.075 runtime=10 00:10:03.075 ioengine=libaio 00:10:03.075 direct=1 00:10:03.075 bs=4096 00:10:03.075 iodepth=1 00:10:03.075 norandommap=1 00:10:03.075 numjobs=1 00:10:03.075 00:10:03.075 [job0] 00:10:03.075 filename=/dev/nvme0n1 00:10:03.075 [job1] 00:10:03.075 filename=/dev/nvme0n2 00:10:03.075 [job2] 00:10:03.075 filename=/dev/nvme0n3 00:10:03.075 [job3] 00:10:03.075 filename=/dev/nvme0n4 00:10:03.075 Could not set queue depth (nvme0n1) 00:10:03.075 Could not set queue depth (nvme0n2) 00:10:03.075 Could not set queue depth (nvme0n3) 00:10:03.075 Could not set queue depth (nvme0n4) 00:10:03.335 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.335 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:03.335 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.335 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.335 fio-3.35 00:10:03.335 Starting 4 threads 00:10:06.633 21:58:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:06.633 21:58:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:06.633 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=253952, buflen=4096 00:10:06.633 fio: pid=3353392, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.633 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10153984, buflen=4096 00:10:06.633 fio: pid=3353379, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.633 21:58:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.633 21:58:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:06.633 21:58:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.633 21:58:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:06.633 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=290816, buflen=4096 00:10:06.633 fio: pid=3353325, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 
00:10:06.894 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=303104, buflen=4096 00:10:06.894 fio: pid=3353352, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:06.894 21:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.894 21:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:06.894 00:10:06.894 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3353325: Sat Oct 12 21:58:25 2024 00:10:06.894 read: IOPS=24, BW=96.9KiB/s (99.2kB/s)(284KiB/2931msec) 00:10:06.894 slat (usec): min=8, max=15581, avg=243.11, stdev=1833.15 00:10:06.894 clat (usec): min=895, max=43007, avg=40715.49, stdev=6844.94 00:10:06.894 lat (usec): min=934, max=43033, avg=40742.56, stdev=6843.91 00:10:06.894 clat percentiles (usec): 00:10:06.894 | 1.00th=[ 898], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:06.894 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:06.894 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:10:06.894 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:06.894 | 99.99th=[43254] 00:10:06.894 bw ( KiB/s): min= 96, max= 104, per=2.80%, avg=97.60, stdev= 3.58, samples=5 00:10:06.895 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:10:06.895 lat (usec) : 1000=2.78% 00:10:06.895 lat (msec) : 50=95.83% 00:10:06.895 cpu : usr=0.14%, sys=0.00%, ctx=74, majf=0, minf=1 00:10:06.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.895 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.895 issued rwts: 
total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.895 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3353352: Sat Oct 12 21:58:25 2024 00:10:06.895 read: IOPS=24, BW=95.6KiB/s (97.9kB/s)(296KiB/3097msec) 00:10:06.895 slat (usec): min=22, max=3617, avg=96.58, stdev=455.54 00:10:06.895 clat (usec): min=807, max=43030, avg=41437.58, stdev=4810.63 00:10:06.895 lat (usec): min=847, max=45993, avg=41535.07, stdev=4844.56 00:10:06.895 clat percentiles (usec): 00:10:06.895 | 1.00th=[ 807], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:10:06.895 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:06.895 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:10:06.895 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:06.895 | 99.99th=[43254] 00:10:06.895 bw ( KiB/s): min= 90, max= 96, per=2.74%, avg=95.00, stdev= 2.45, samples=6 00:10:06.895 iops : min= 22, max= 24, avg=23.67, stdev= 0.82, samples=6 00:10:06.895 lat (usec) : 1000=1.33% 00:10:06.895 lat (msec) : 50=97.33% 00:10:06.895 cpu : usr=0.10%, sys=0.00%, ctx=78, majf=0, minf=2 00:10:06.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.895 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.895 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.895 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3353379: Sat Oct 12 21:58:25 2024 00:10:06.895 read: IOPS=908, BW=3632KiB/s (3719kB/s)(9916KiB/2730msec) 00:10:06.895 slat (usec): min=6, max=21174, avg=41.48, stdev=519.72 00:10:06.895 clat (usec): min=553, max=1786, avg=1042.82, 
stdev=82.52 00:10:06.895 lat (usec): min=563, max=22215, avg=1084.31, stdev=526.66 00:10:06.895 clat percentiles (usec): 00:10:06.895 | 1.00th=[ 783], 5.00th=[ 889], 10.00th=[ 947], 20.00th=[ 988], 00:10:06.895 | 30.00th=[ 1012], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1074], 00:10:06.895 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1156], 00:10:06.895 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1254], 00:10:06.895 | 99.99th=[ 1795] 00:10:06.895 bw ( KiB/s): min= 3624, max= 3968, per=100.00%, avg=3724.80, stdev=138.17, samples=5 00:10:06.895 iops : min= 906, max= 992, avg=931.20, stdev=34.54, samples=5 00:10:06.895 lat (usec) : 750=0.40%, 1000=25.56% 00:10:06.895 lat (msec) : 2=73.99% 00:10:06.895 cpu : usr=2.20%, sys=3.08%, ctx=2482, majf=0, minf=2 00:10:06.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.895 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.895 issued rwts: total=2480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.895 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3353392: Sat Oct 12 21:58:25 2024 00:10:06.895 read: IOPS=24, BW=96.0KiB/s (98.3kB/s)(248KiB/2584msec) 00:10:06.895 slat (nsec): min=26026, max=34285, avg=26638.95, stdev=1047.41 00:10:06.895 clat (usec): min=1075, max=42975, avg=41289.00, stdev=5221.86 00:10:06.895 lat (usec): min=1109, max=43001, avg=41315.64, stdev=5220.88 00:10:06.895 clat percentiles (usec): 00:10:06.895 | 1.00th=[ 1074], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:10:06.895 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:06.895 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:10:06.895 | 99.00th=[42730], 99.50th=[42730], 
99.90th=[42730], 99.95th=[42730], 00:10:06.895 | 99.99th=[42730] 00:10:06.895 bw ( KiB/s): min= 96, max= 96, per=2.77%, avg=96.00, stdev= 0.00, samples=5 00:10:06.895 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:10:06.895 lat (msec) : 2=1.59%, 50=96.83% 00:10:06.895 cpu : usr=0.15%, sys=0.00%, ctx=63, majf=0, minf=2 00:10:06.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.895 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.895 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:06.895 00:10:06.895 Run status group 0 (all jobs): 00:10:06.895 READ: bw=3469KiB/s (3552kB/s), 95.6KiB/s-3632KiB/s (97.9kB/s-3719kB/s), io=10.5MiB (11.0MB), run=2584-3097msec 00:10:06.895 00:10:06.895 Disk stats (read/write): 00:10:06.895 nvme0n1: ios=68/0, merge=0/0, ticks=2767/0, in_queue=2767, util=92.92% 00:10:06.895 nvme0n2: ios=72/0, merge=0/0, ticks=2985/0, in_queue=2985, util=94.27% 00:10:06.895 nvme0n3: ios=2354/0, merge=0/0, ticks=2233/0, in_queue=2233, util=95.55% 00:10:06.895 nvme0n4: ios=61/0, merge=0/0, ticks=2521/0, in_queue=2521, util=96.35% 00:10:06.895 21:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.895 21:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:07.156 21:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.156 21:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
00:10:07.416 21:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.416 21:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:07.416 21:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.416 21:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3352997 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:07.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.676 21:58:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:07.676 nvmf hotplug test: fio failed as expected 00:10:07.676 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.938 rmmod nvme_tcp 00:10:07.938 rmmod nvme_fabrics 00:10:07.938 rmmod nvme_keyring 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 3349475 ']' 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 3349475 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3349475 ']' 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3349475 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.938 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3349475 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3349475' 00:10:08.200 killing process with pid 3349475 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3349475 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3349475 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.200 21:58:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.743 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.744 00:10:10.744 real 0m29.334s 00:10:10.744 user 2m37.324s 00:10:10.744 sys 0m9.494s 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.744 ************************************ 00:10:10.744 END TEST nvmf_fio_target 00:10:10.744 ************************************ 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.744 21:58:28 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.744 ************************************ 00:10:10.744 START TEST nvmf_bdevio 00:10:10.744 ************************************ 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:10.744 * Looking for test storage... 00:10:10.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@368 -- # return 0 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:10.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.744 --rc genhtml_branch_coverage=1 00:10:10.744 --rc genhtml_function_coverage=1 00:10:10.744 --rc genhtml_legend=1 00:10:10.744 --rc geninfo_all_blocks=1 00:10:10.744 --rc geninfo_unexecuted_blocks=1 00:10:10.744 00:10:10.744 ' 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:10.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.744 --rc genhtml_branch_coverage=1 00:10:10.744 --rc genhtml_function_coverage=1 00:10:10.744 --rc genhtml_legend=1 00:10:10.744 --rc geninfo_all_blocks=1 00:10:10.744 --rc geninfo_unexecuted_blocks=1 00:10:10.744 00:10:10.744 ' 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:10.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.744 --rc genhtml_branch_coverage=1 00:10:10.744 --rc genhtml_function_coverage=1 00:10:10.744 --rc genhtml_legend=1 00:10:10.744 --rc geninfo_all_blocks=1 00:10:10.744 --rc geninfo_unexecuted_blocks=1 00:10:10.744 00:10:10.744 ' 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:10.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.744 --rc genhtml_branch_coverage=1 00:10:10.744 --rc genhtml_function_coverage=1 00:10:10.744 --rc genhtml_legend=1 00:10:10.744 --rc geninfo_all_blocks=1 00:10:10.744 --rc geninfo_unexecuted_blocks=1 00:10:10.744 00:10:10.744 ' 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.744 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.745 21:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.745 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:10.745 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:10.745 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.745 21:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.882 21:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.882 21:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.882 21:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.882 21:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.882 21:58:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.882 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:18.883 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:18.883 21:58:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:18.883 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.883 21:58:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:18.883 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:18.883 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:10:18.883 00:10:18.883 --- 10.0.0.2 ping statistics --- 00:10:18.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.883 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:10:18.883 00:10:18.883 --- 10.0.0.1 ping statistics --- 00:10:18.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.883 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=3358530 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 3358530 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3358530 ']' 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:18.883 21:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.883 [2024-10-12 21:58:36.429809] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:18.883 [2024-10-12 21:58:36.429887] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.884 [2024-10-12 21:58:36.518990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.884 [2024-10-12 21:58:36.566875] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.884 [2024-10-12 21:58:36.566925] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:18.884 [2024-10-12 21:58:36.566933] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.884 [2024-10-12 21:58:36.566941] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.884 [2024-10-12 21:58:36.566947] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.884 [2024-10-12 21:58:36.567099] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:18.884 [2024-10-12 21:58:36.567255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:18.884 [2024-10-12 21:58:36.567462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:18.884 [2024-10-12 21:58:36.567464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.884 [2024-10-12 21:58:37.295895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.884 Malloc0 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:18.884 [2024-10-12 
21:58:37.361482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:18.884 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:18.884 { 00:10:18.884 "params": { 00:10:18.884 "name": "Nvme$subsystem", 00:10:18.884 "trtype": "$TEST_TRANSPORT", 00:10:18.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.884 "adrfam": "ipv4", 00:10:18.884 "trsvcid": "$NVMF_PORT", 00:10:18.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.884 "hdgst": ${hdgst:-false}, 00:10:18.884 "ddgst": ${ddgst:-false} 00:10:18.884 }, 00:10:18.884 "method": "bdev_nvme_attach_controller" 00:10:18.884 } 00:10:18.884 EOF 00:10:18.884 )") 00:10:19.145 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:19.145 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:10:19.145 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:19.145 21:58:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:19.145 "params": { 00:10:19.145 "name": "Nvme1", 00:10:19.145 "trtype": "tcp", 00:10:19.145 "traddr": "10.0.0.2", 00:10:19.145 "adrfam": "ipv4", 00:10:19.145 "trsvcid": "4420", 00:10:19.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:19.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:19.145 "hdgst": false, 00:10:19.145 "ddgst": false 00:10:19.145 }, 00:10:19.145 "method": "bdev_nvme_attach_controller" 00:10:19.145 }' 00:10:19.145 [2024-10-12 21:58:37.420052] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:19.145 [2024-10-12 21:58:37.420142] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3358728 ] 00:10:19.145 [2024-10-12 21:58:37.503161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:19.145 [2024-10-12 21:58:37.552877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.145 [2024-10-12 21:58:37.553039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.145 [2024-10-12 21:58:37.553040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.406 I/O targets: 00:10:19.406 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:19.406 00:10:19.406 00:10:19.406 CUnit - A unit testing framework for C - Version 2.1-3 00:10:19.406 http://cunit.sourceforge.net/ 00:10:19.406 00:10:19.406 00:10:19.406 Suite: bdevio tests on: Nvme1n1 00:10:19.406 Test: blockdev write read block ...passed 00:10:19.666 Test: blockdev write zeroes read block ...passed 00:10:19.666 Test: blockdev write zeroes read no split ...passed 00:10:19.666 Test: blockdev write zeroes read split 
...passed 00:10:19.666 Test: blockdev write zeroes read split partial ...passed 00:10:19.666 Test: blockdev reset ...[2024-10-12 21:58:37.986456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:19.666 [2024-10-12 21:58:37.986534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff0c50 (9): Bad file descriptor 00:10:19.666 [2024-10-12 21:58:38.001775] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:19.666 passed 00:10:19.666 Test: blockdev write read 8 blocks ...passed 00:10:19.666 Test: blockdev write read size > 128k ...passed 00:10:19.666 Test: blockdev write read invalid size ...passed 00:10:19.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:19.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:19.666 Test: blockdev write read max offset ...passed 00:10:19.666 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:19.925 Test: blockdev writev readv 8 blocks ...passed 00:10:19.925 Test: blockdev writev readv 30 x 1block ...passed 00:10:19.925 Test: blockdev writev readv block ...passed 00:10:19.925 Test: blockdev writev readv size > 128k ...passed 00:10:19.925 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:19.925 Test: blockdev comparev and writev ...[2024-10-12 21:58:38.223701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.925 [2024-10-12 21:58:38.223738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:19.925 [2024-10-12 21:58:38.223754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.925 [2024-10-12 21:58:38.223762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:19.925 [2024-10-12 21:58:38.224237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.925 [2024-10-12 21:58:38.224251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:19.925 [2024-10-12 21:58:38.224264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.925 [2024-10-12 21:58:38.224272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:19.925 [2024-10-12 21:58:38.224758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.926 [2024-10-12 21:58:38.224770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:19.926 [2024-10-12 21:58:38.224783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.926 [2024-10-12 21:58:38.224791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:19.926 [2024-10-12 21:58:38.225254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:19.926 [2024-10-12 21:58:38.225267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:19.926 [2024-10-12 21:58:38.225285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:10:19.926 [2024-10-12 21:58:38.225293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:19.926 passed 00:10:19.926 Test: blockdev nvme passthru rw ...passed 00:10:19.926 Test: blockdev nvme passthru vendor specific ...[2024-10-12 21:58:38.308931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:19.926 [2024-10-12 21:58:38.308947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:19.926 [2024-10-12 21:58:38.309314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:19.926 [2024-10-12 21:58:38.309327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:19.926 [2024-10-12 21:58:38.309695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:19.926 [2024-10-12 21:58:38.309706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:19.926 [2024-10-12 21:58:38.310075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:19.926 [2024-10-12 21:58:38.310086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:19.926 passed 00:10:19.926 Test: blockdev nvme admin passthru ...passed 00:10:19.926 Test: blockdev copy ...passed 00:10:19.926 00:10:19.926 Run Summary: Type Total Ran Passed Failed Inactive 00:10:19.926 suites 1 1 n/a 0 0 00:10:19.926 tests 23 23 23 0 0 00:10:19.926 asserts 152 152 152 0 n/a 00:10:19.926 00:10:19.926 Elapsed time = 1.103 seconds 00:10:20.186 21:58:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:20.186 rmmod nvme_tcp 00:10:20.186 rmmod nvme_fabrics 00:10:20.186 rmmod nvme_keyring 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 3358530 ']' 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 3358530 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3358530 ']' 
00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3358530 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3358530 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3358530' 00:10:20.186 killing process with pid 3358530 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3358530 00:10:20.186 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3358530 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.447 
21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.447 21:58:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.356 21:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.356 00:10:22.356 real 0m12.070s 00:10:22.356 user 0m13.095s 00:10:22.356 sys 0m6.134s 00:10:22.356 21:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.356 21:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.356 ************************************ 00:10:22.356 END TEST nvmf_bdevio 00:10:22.356 ************************************ 00:10:22.616 21:58:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:22.616 00:10:22.616 real 5m5.354s 00:10:22.616 user 11m58.878s 00:10:22.616 sys 1m51.445s 00:10:22.617 21:58:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.617 21:58:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.617 ************************************ 00:10:22.617 END TEST nvmf_target_core 00:10:22.617 ************************************ 00:10:22.617 21:58:40 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:22.617 21:58:40 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.617 21:58:40 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.617 21:58:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.617 
************************************ 00:10:22.617 START TEST nvmf_target_extra 00:10:22.617 ************************************ 00:10:22.617 21:58:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:22.617 * Looking for test storage... 00:10:22.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:22.617 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:22.617 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:22.617 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:22.877 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:22.877 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.877 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.877 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.877 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.877 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.877 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:22.878 
21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:22.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.878 --rc genhtml_branch_coverage=1 00:10:22.878 --rc genhtml_function_coverage=1 00:10:22.878 --rc genhtml_legend=1 00:10:22.878 --rc geninfo_all_blocks=1 00:10:22.878 
--rc geninfo_unexecuted_blocks=1 00:10:22.878 00:10:22.878 ' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:22.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.878 --rc genhtml_branch_coverage=1 00:10:22.878 --rc genhtml_function_coverage=1 00:10:22.878 --rc genhtml_legend=1 00:10:22.878 --rc geninfo_all_blocks=1 00:10:22.878 --rc geninfo_unexecuted_blocks=1 00:10:22.878 00:10:22.878 ' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:22.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.878 --rc genhtml_branch_coverage=1 00:10:22.878 --rc genhtml_function_coverage=1 00:10:22.878 --rc genhtml_legend=1 00:10:22.878 --rc geninfo_all_blocks=1 00:10:22.878 --rc geninfo_unexecuted_blocks=1 00:10:22.878 00:10:22.878 ' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:22.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.878 --rc genhtml_branch_coverage=1 00:10:22.878 --rc genhtml_function_coverage=1 00:10:22.878 --rc genhtml_legend=1 00:10:22.878 --rc geninfo_all_blocks=1 00:10:22.878 --rc geninfo_unexecuted_blocks=1 00:10:22.878 00:10:22.878 ' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:22.878 ************************************ 00:10:22.878 START TEST nvmf_example 00:10:22.878 ************************************ 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:22.878 * Looking for test storage... 00:10:22.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:22.878 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.140 
21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.140 --rc genhtml_branch_coverage=1 00:10:23.140 --rc genhtml_function_coverage=1 00:10:23.140 --rc genhtml_legend=1 00:10:23.140 --rc geninfo_all_blocks=1 00:10:23.140 --rc geninfo_unexecuted_blocks=1 00:10:23.140 00:10:23.140 ' 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.140 --rc genhtml_branch_coverage=1 00:10:23.140 --rc genhtml_function_coverage=1 00:10:23.140 --rc genhtml_legend=1 00:10:23.140 --rc geninfo_all_blocks=1 00:10:23.140 --rc geninfo_unexecuted_blocks=1 00:10:23.140 00:10:23.140 ' 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:23.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.140 --rc genhtml_branch_coverage=1 00:10:23.140 --rc genhtml_function_coverage=1 00:10:23.140 --rc genhtml_legend=1 00:10:23.140 --rc geninfo_all_blocks=1 00:10:23.140 --rc geninfo_unexecuted_blocks=1 00:10:23.140 00:10:23.140 ' 00:10:23.140 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:23.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.141 --rc 
genhtml_branch_coverage=1 00:10:23.141 --rc genhtml_function_coverage=1 00:10:23.141 --rc genhtml_legend=1 00:10:23.141 --rc geninfo_all_blocks=1 00:10:23.141 --rc geninfo_unexecuted_blocks=1 00:10:23.141 00:10:23.141 ' 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:23.141 21:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.141 
21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.141 21:58:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:31.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:31.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:31.281 
21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:31.281 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@414 -- # [[ up == up ]] 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:31.281 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # is_hw=yes 00:10:31.281 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.282 21:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:10:31.282 00:10:31.282 --- 10.0.0.2 ping statistics --- 00:10:31.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.282 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:10:31.282 00:10:31.282 --- 10.0.0.1 ping statistics --- 00:10:31.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.282 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # 
nvmfexamplestart '-m 0xF' 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3363291 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3363291 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3363291 ']' 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.282 21:58:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.543 21:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:31.544 21:58:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.544 21:58:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:31.544 21:58:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:43.766 Initializing NVMe Controllers 00:10:43.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:43.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:43.766 Initialization complete. Launching workers. 00:10:43.766 ======================================================== 00:10:43.766 Latency(us) 00:10:43.766 Device Information : IOPS MiB/s Average min max 00:10:43.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19482.47 76.10 3284.69 629.80 16009.26 00:10:43.766 ======================================================== 00:10:43.766 Total : 19482.47 76.10 3284.69 629.80 16009.26 00:10:43.766 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.766 rmmod nvme_tcp 00:10:43.766 rmmod nvme_fabrics 00:10:43.766 rmmod nvme_keyring 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- 
# return 0 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 3363291 ']' 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 3363291 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3363291 ']' 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3363291 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3363291 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3363291' 00:10:43.766 killing process with pid 3363291 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3363291 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3363291 00:10:43.766 nvmf threads initialize successfully 00:10:43.766 bdev subsystem init successfully 00:10:43.766 created a nvmf target service 00:10:43.766 create targets's poll groups done 00:10:43.766 all subsystems of target started 00:10:43.766 nvmf target is running 00:10:43.766 all subsystems of target stopped 00:10:43.766 destroy targets's poll groups done 00:10:43.766 destroyed the nvmf target service 00:10:43.766 bdev subsystem finish successfully 00:10:43.766 nvmf threads destroy successfully 00:10:43.766 21:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.766 21:59:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.026 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.026 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:44.026 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:44.026 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.284 00:10:44.284 real 0m21.312s 00:10:44.284 user 0m46.381s 00:10:44.284 sys 0m6.945s 00:10:44.284 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:44.284 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.284 ************************************ 00:10:44.284 END TEST nvmf_example 00:10:44.284 ************************************ 00:10:44.284 21:59:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:44.284 21:59:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:44.284 21:59:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.284 21:59:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.284 ************************************ 00:10:44.284 START TEST nvmf_filesystem 00:10:44.284 ************************************ 00:10:44.284 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:44.284 * Looking for test storage... 
00:10:44.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.284 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:44.284 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:44.284 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:44.548 
21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:44.548 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:44.548 --rc genhtml_branch_coverage=1 00:10:44.548 --rc genhtml_function_coverage=1 00:10:44.548 --rc genhtml_legend=1 00:10:44.548 --rc geninfo_all_blocks=1 00:10:44.548 --rc geninfo_unexecuted_blocks=1 00:10:44.548 00:10:44.548 ' 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:44.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.548 --rc genhtml_branch_coverage=1 00:10:44.548 --rc genhtml_function_coverage=1 00:10:44.548 --rc genhtml_legend=1 00:10:44.548 --rc geninfo_all_blocks=1 00:10:44.548 --rc geninfo_unexecuted_blocks=1 00:10:44.548 00:10:44.548 ' 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:44.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.548 --rc genhtml_branch_coverage=1 00:10:44.548 --rc genhtml_function_coverage=1 00:10:44.548 --rc genhtml_legend=1 00:10:44.548 --rc geninfo_all_blocks=1 00:10:44.548 --rc geninfo_unexecuted_blocks=1 00:10:44.548 00:10:44.548 ' 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:44.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.548 --rc genhtml_branch_coverage=1 00:10:44.548 --rc genhtml_function_coverage=1 00:10:44.548 --rc genhtml_legend=1 00:10:44.548 --rc geninfo_all_blocks=1 00:10:44.548 --rc geninfo_unexecuted_blocks=1 00:10:44.548 00:10:44.548 ' 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:44.548 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:44.548 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:44.548 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:44.549 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- 
# CONFIG_DAOS=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 
00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:44.549 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:44.549 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:44.549 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:44.549 #define SPDK_CONFIG_H 00:10:44.549 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:44.549 #define SPDK_CONFIG_APPS 1 00:10:44.549 #define SPDK_CONFIG_ARCH native 00:10:44.549 #undef SPDK_CONFIG_ASAN 00:10:44.549 #undef SPDK_CONFIG_AVAHI 00:10:44.549 #undef SPDK_CONFIG_CET 00:10:44.549 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:44.549 #define SPDK_CONFIG_COVERAGE 1 00:10:44.549 #define SPDK_CONFIG_CROSS_PREFIX 00:10:44.549 #undef SPDK_CONFIG_CRYPTO 00:10:44.549 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:44.549 #undef SPDK_CONFIG_CUSTOMOCF 00:10:44.549 #undef SPDK_CONFIG_DAOS 00:10:44.549 #define SPDK_CONFIG_DAOS_DIR 00:10:44.549 #define SPDK_CONFIG_DEBUG 1 00:10:44.549 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:44.549 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:44.549 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:44.549 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:44.549 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:44.549 #undef SPDK_CONFIG_DPDK_UADK 00:10:44.549 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:44.549 #define SPDK_CONFIG_EXAMPLES 1 00:10:44.549 #undef SPDK_CONFIG_FC 00:10:44.549 #define SPDK_CONFIG_FC_PATH 00:10:44.549 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:44.549 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:44.549 #define SPDK_CONFIG_FSDEV 1 00:10:44.549 #undef SPDK_CONFIG_FUSE 00:10:44.549 #undef SPDK_CONFIG_FUZZER 00:10:44.549 #define SPDK_CONFIG_FUZZER_LIB 00:10:44.549 #undef SPDK_CONFIG_GOLANG 00:10:44.549 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:44.549 #define 
SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:44.549 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:44.549 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:44.549 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:44.549 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:44.549 #undef SPDK_CONFIG_HAVE_LZ4 00:10:44.550 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:44.550 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:44.550 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:44.550 #define SPDK_CONFIG_IDXD 1 00:10:44.550 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:44.550 #undef SPDK_CONFIG_IPSEC_MB 00:10:44.550 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:44.550 #define SPDK_CONFIG_ISAL 1 00:10:44.550 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:44.550 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:44.550 #define SPDK_CONFIG_LIBDIR 00:10:44.550 #undef SPDK_CONFIG_LTO 00:10:44.550 #define SPDK_CONFIG_MAX_LCORES 128 00:10:44.550 #define SPDK_CONFIG_NVME_CUSE 1 00:10:44.550 #undef SPDK_CONFIG_OCF 00:10:44.550 #define SPDK_CONFIG_OCF_PATH 00:10:44.550 #define SPDK_CONFIG_OPENSSL_PATH 00:10:44.550 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:44.550 #define SPDK_CONFIG_PGO_DIR 00:10:44.550 #undef SPDK_CONFIG_PGO_USE 00:10:44.550 #define SPDK_CONFIG_PREFIX /usr/local 00:10:44.550 #undef SPDK_CONFIG_RAID5F 00:10:44.550 #undef SPDK_CONFIG_RBD 00:10:44.550 #define SPDK_CONFIG_RDMA 1 00:10:44.550 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:44.550 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:44.550 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:44.550 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:44.550 #define SPDK_CONFIG_SHARED 1 00:10:44.550 #undef SPDK_CONFIG_SMA 00:10:44.550 #define SPDK_CONFIG_TESTS 1 00:10:44.550 #undef SPDK_CONFIG_TSAN 00:10:44.550 #define SPDK_CONFIG_UBLK 1 00:10:44.550 #define SPDK_CONFIG_UBSAN 1 00:10:44.550 #undef SPDK_CONFIG_UNIT_TESTS 00:10:44.550 #undef SPDK_CONFIG_URING 00:10:44.550 #define SPDK_CONFIG_URING_PATH 00:10:44.550 #undef SPDK_CONFIG_URING_ZNS 00:10:44.550 #undef SPDK_CONFIG_USDT 00:10:44.550 
#undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:44.550 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:44.550 #define SPDK_CONFIG_VFIO_USER 1 00:10:44.550 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:44.550 #define SPDK_CONFIG_VHOST 1 00:10:44.550 #define SPDK_CONFIG_VIRTIO 1 00:10:44.550 #undef SPDK_CONFIG_VTUNE 00:10:44.550 #define SPDK_CONFIG_VTUNE_DIR 00:10:44.550 #define SPDK_CONFIG_WERROR 1 00:10:44.550 #define SPDK_CONFIG_WPDK_DIR 00:10:44.550 #undef SPDK_CONFIG_XNVME 00:10:44.550 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.550 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:44.550 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:44.550 
21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:44.550 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:44.551 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:44.551 
21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:44.551 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:44.551 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3366088 ]] 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3366088 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.HLvXv2 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HLvXv2/tests/target /tmp/spdk.HLvXv2 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:44.552 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=607141888 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4677287936 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=121410187264 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356558336 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7946371072 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668246016 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678277120 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847951360 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871314944 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23363584 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:44.553 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677888000 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678281216 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=393216 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935643136 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935655424 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:44.553 * Looking for test storage... 
00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=121410187264 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10160963584 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.553 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:44.553 21:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:44.553 21:59:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:44.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.814 --rc genhtml_branch_coverage=1 00:10:44.814 --rc genhtml_function_coverage=1 00:10:44.814 --rc genhtml_legend=1 00:10:44.814 --rc geninfo_all_blocks=1 00:10:44.814 --rc geninfo_unexecuted_blocks=1 00:10:44.814 00:10:44.814 ' 00:10:44.814 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:44.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.814 --rc genhtml_branch_coverage=1 00:10:44.814 --rc genhtml_function_coverage=1 00:10:44.814 --rc genhtml_legend=1 00:10:44.814 --rc geninfo_all_blocks=1 00:10:44.815 --rc geninfo_unexecuted_blocks=1 00:10:44.815 00:10:44.815 ' 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:44.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.815 --rc genhtml_branch_coverage=1 00:10:44.815 --rc genhtml_function_coverage=1 00:10:44.815 --rc genhtml_legend=1 00:10:44.815 --rc geninfo_all_blocks=1 00:10:44.815 --rc geninfo_unexecuted_blocks=1 00:10:44.815 00:10:44.815 ' 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:44.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.815 --rc genhtml_branch_coverage=1 00:10:44.815 --rc genhtml_function_coverage=1 00:10:44.815 --rc genhtml_legend=1 00:10:44.815 --rc geninfo_all_blocks=1 00:10:44.815 --rc geninfo_unexecuted_blocks=1 00:10:44.815 00:10:44.815 ' 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.815 21:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.815 21:59:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.951 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.952 21:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:52.952 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 
00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:52.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:52.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:52.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # 
(( 2 == 0 )) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:52.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:52.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:10:52.952 00:10:52.952 --- 10.0.0.2 ping statistics --- 00:10:52.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.952 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:52.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:10:52.952 00:10:52.952 --- 10.0.0.1 ping statistics --- 00:10:52.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.952 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.952 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:52.953 21:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.953 ************************************ 00:10:52.953 START TEST nvmf_filesystem_no_in_capsule 00:10:52.953 ************************************ 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3370030 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3370030 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3370030 ']' 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:52.953 21:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.953 [2024-10-12 21:59:10.777951] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:52.953 [2024-10-12 21:59:10.778012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.953 [2024-10-12 21:59:10.867467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.953 [2024-10-12 21:59:10.916898] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.953 [2024-10-12 21:59:10.916957] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:52.953 [2024-10-12 21:59:10.916965] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.953 [2024-10-12 21:59:10.916972] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.953 [2024-10-12 21:59:10.916979] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.953 [2024-10-12 21:59:10.917150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.953 [2024-10-12 21:59:10.917238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.953 [2024-10-12 21:59:10.917394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.953 [2024-10-12 21:59:10.917396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.215 [2024-10-12 21:59:11.660755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.215 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.476 Malloc1 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.476 [2024-10-12 21:59:11.820968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.476 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:53.477 21:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:53.477 { 00:10:53.477 "name": "Malloc1", 00:10:53.477 "aliases": [ 00:10:53.477 "14b47f3e-99ec-47c7-84d8-f23b87cb68d3" 00:10:53.477 ], 00:10:53.477 "product_name": "Malloc disk", 00:10:53.477 "block_size": 512, 00:10:53.477 "num_blocks": 1048576, 00:10:53.477 "uuid": "14b47f3e-99ec-47c7-84d8-f23b87cb68d3", 00:10:53.477 "assigned_rate_limits": { 00:10:53.477 "rw_ios_per_sec": 0, 00:10:53.477 "rw_mbytes_per_sec": 0, 00:10:53.477 "r_mbytes_per_sec": 0, 00:10:53.477 "w_mbytes_per_sec": 0 00:10:53.477 }, 00:10:53.477 "claimed": true, 00:10:53.477 "claim_type": "exclusive_write", 00:10:53.477 "zoned": false, 00:10:53.477 "supported_io_types": { 00:10:53.477 "read": true, 00:10:53.477 "write": true, 00:10:53.477 "unmap": true, 00:10:53.477 "flush": true, 00:10:53.477 "reset": true, 00:10:53.477 "nvme_admin": false, 00:10:53.477 "nvme_io": false, 00:10:53.477 "nvme_io_md": false, 00:10:53.477 "write_zeroes": true, 00:10:53.477 "zcopy": true, 00:10:53.477 "get_zone_info": false, 00:10:53.477 "zone_management": false, 00:10:53.477 "zone_append": false, 00:10:53.477 "compare": false, 00:10:53.477 "compare_and_write": 
false, 00:10:53.477 "abort": true, 00:10:53.477 "seek_hole": false, 00:10:53.477 "seek_data": false, 00:10:53.477 "copy": true, 00:10:53.477 "nvme_iov_md": false 00:10:53.477 }, 00:10:53.477 "memory_domains": [ 00:10:53.477 { 00:10:53.477 "dma_device_id": "system", 00:10:53.477 "dma_device_type": 1 00:10:53.477 }, 00:10:53.477 { 00:10:53.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.477 "dma_device_type": 2 00:10:53.477 } 00:10:53.477 ], 00:10:53.477 "driver_specific": {} 00:10:53.477 } 00:10:53.477 ]' 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:53.477 21:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.387 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:55.387 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:55.387 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.387 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:55.387 21:59:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:57.296 21:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:57.296 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:57.556 21:59:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:57.816 21:59:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:59.197 21:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.197 ************************************ 00:10:59.197 START TEST filesystem_ext4 00:10:59.197 ************************************ 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:59.197 21:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:59.197 mke2fs 1.47.0 (5-Feb-2023) 00:10:59.197 Discarding device blocks: 0/522240 done 00:10:59.197 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:59.197 Filesystem UUID: cd2ccec5-3c68-4e00-9a9e-2bec7b09ebbb 00:10:59.197 Superblock backups stored on blocks: 00:10:59.197 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:59.197 00:10:59.197 Allocating group tables: 0/64 done 00:10:59.197 Writing inode tables: 0/64 done 00:10:59.197 Creating journal (8192 blocks): done 00:10:59.197 Writing superblocks and filesystem accounting information: 0/64 done 00:10:59.197 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:59.197 21:59:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:05.772 21:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3370030 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:05.772 00:11:05.772 real 0m6.492s 00:11:05.772 user 0m0.035s 00:11:05.772 sys 0m0.067s 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:05.772 ************************************ 00:11:05.772 END TEST filesystem_ext4 00:11:05.772 ************************************ 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:05.772 
21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.772 ************************************ 00:11:05.772 START TEST filesystem_btrfs 00:11:05.772 ************************************ 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:05.772 21:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:05.772 21:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:05.772 btrfs-progs v6.8.1 00:11:05.772 See https://btrfs.readthedocs.io for more information. 00:11:05.772 00:11:05.772 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:05.772 NOTE: several default settings have changed in version 5.15, please make sure 00:11:05.772 this does not affect your deployments: 00:11:05.772 - DUP for metadata (-m dup) 00:11:05.772 - enabled no-holes (-O no-holes) 00:11:05.772 - enabled free-space-tree (-R free-space-tree) 00:11:05.772 00:11:05.772 Label: (null) 00:11:05.772 UUID: 5bc9ba16-3097-4a5d-b7ae-87b663cad38f 00:11:05.772 Node size: 16384 00:11:05.772 Sector size: 4096 (CPU page size: 4096) 00:11:05.772 Filesystem size: 510.00MiB 00:11:05.772 Block group profiles: 00:11:05.772 Data: single 8.00MiB 00:11:05.772 Metadata: DUP 32.00MiB 00:11:05.772 System: DUP 8.00MiB 00:11:05.772 SSD detected: yes 00:11:05.772 Zoned device: no 00:11:05.772 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:05.772 Checksum: crc32c 00:11:05.772 Number of devices: 1 00:11:05.772 Devices: 00:11:05.772 ID SIZE PATH 00:11:05.772 1 510.00MiB /dev/nvme0n1p1 00:11:05.772 00:11:05.772 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:05.772 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.033 21:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.033 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:06.033 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.033 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:06.033 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:06.033 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3370030 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.294 00:11:06.294 real 0m0.657s 00:11:06.294 user 0m0.032s 00:11:06.294 sys 0m0.112s 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:06.294 
21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:06.294 ************************************ 00:11:06.294 END TEST filesystem_btrfs 00:11:06.294 ************************************ 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.294 ************************************ 00:11:06.294 START TEST filesystem_xfs 00:11:06.294 ************************************ 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:06.294 21:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:07.233 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:07.233 = sectsz=512 attr=2, projid32bit=1 00:11:07.233 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:07.233 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:07.233 data = bsize=4096 blocks=130560, imaxpct=25 00:11:07.233 = sunit=0 swidth=0 blks 00:11:07.233 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:07.233 log =internal log bsize=4096 blocks=16384, version=2 00:11:07.233 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:07.233 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:08.173 Discarding blocks...Done. 
00:11:08.173 21:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:08.173 21:59:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3370030 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.716 21:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.716 00:11:10.716 real 0m4.297s 00:11:10.716 user 0m0.027s 00:11:10.716 sys 0m0.081s 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.716 ************************************ 00:11:10.716 END TEST filesystem_xfs 00:11:10.716 ************************************ 00:11:10.716 21:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:10.716 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:10.716 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.716 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.716 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:10.717 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:10.717 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.717 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:10.717 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.717 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:10.717 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.717 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.717 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3370030 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3370030 ']' 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3370030 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3370030 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3370030' 00:11:10.978 killing process with pid 3370030 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3370030 00:11:10.978 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3370030 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:11.239 00:11:11.239 real 0m18.771s 00:11:11.239 user 1m14.144s 00:11:11.239 sys 0m1.426s 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.239 ************************************ 00:11:11.239 END TEST nvmf_filesystem_no_in_capsule 00:11:11.239 ************************************ 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.239 21:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.239 ************************************ 00:11:11.239 START TEST nvmf_filesystem_in_capsule 00:11:11.239 ************************************ 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3373959 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3373959 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3373959 ']' 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.239 21:59:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.239 21:59:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.240 [2024-10-12 21:59:29.625724] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:11.240 [2024-10-12 21:59:29.625784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.240 [2024-10-12 21:59:29.712495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.501 [2024-10-12 21:59:29.746611] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.501 [2024-10-12 21:59:29.746646] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.501 [2024-10-12 21:59:29.746652] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.501 [2024-10-12 21:59:29.746656] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.501 [2024-10-12 21:59:29.746660] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:11.501 [2024-10-12 21:59:29.746805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.501 [2024-10-12 21:59:29.746958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.501 [2024-10-12 21:59:29.747112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.501 [2024-10-12 21:59:29.747129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.071 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.071 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.072 [2024-10-12 21:59:30.479488] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.072 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 Malloc1 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 21:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 [2024-10-12 21:59:30.608447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.350 21:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.350 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:12.350 { 00:11:12.350 "name": "Malloc1", 00:11:12.350 "aliases": [ 00:11:12.350 "84f3cf2a-270e-47cd-9860-54df515dbb12" 00:11:12.350 ], 00:11:12.350 "product_name": "Malloc disk", 00:11:12.350 "block_size": 512, 00:11:12.351 "num_blocks": 1048576, 00:11:12.351 "uuid": "84f3cf2a-270e-47cd-9860-54df515dbb12", 00:11:12.351 "assigned_rate_limits": { 00:11:12.351 "rw_ios_per_sec": 0, 00:11:12.351 "rw_mbytes_per_sec": 0, 00:11:12.351 "r_mbytes_per_sec": 0, 00:11:12.351 "w_mbytes_per_sec": 0 00:11:12.351 }, 00:11:12.351 "claimed": true, 00:11:12.351 "claim_type": "exclusive_write", 00:11:12.351 "zoned": false, 00:11:12.351 "supported_io_types": { 00:11:12.351 "read": true, 00:11:12.351 "write": true, 00:11:12.351 "unmap": true, 00:11:12.351 "flush": true, 00:11:12.351 "reset": true, 00:11:12.351 "nvme_admin": false, 00:11:12.351 "nvme_io": false, 00:11:12.351 "nvme_io_md": false, 00:11:12.351 "write_zeroes": true, 00:11:12.351 "zcopy": true, 00:11:12.351 "get_zone_info": false, 00:11:12.351 "zone_management": false, 00:11:12.351 "zone_append": false, 00:11:12.351 "compare": false, 00:11:12.351 "compare_and_write": false, 00:11:12.351 "abort": true, 00:11:12.351 "seek_hole": false, 00:11:12.351 "seek_data": false, 00:11:12.351 "copy": true, 00:11:12.351 "nvme_iov_md": false 00:11:12.351 }, 00:11:12.351 "memory_domains": [ 00:11:12.351 { 00:11:12.351 "dma_device_id": "system", 00:11:12.351 "dma_device_type": 1 00:11:12.351 }, 00:11:12.351 { 00:11:12.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.351 "dma_device_type": 2 00:11:12.351 } 00:11:12.351 ], 00:11:12.351 
"driver_specific": {} 00:11:12.351 } 00:11:12.351 ]' 00:11:12.351 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:12.351 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:12.351 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:12.351 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:12.351 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:12.351 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:12.351 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:12.351 21:59:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.863 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.863 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:13.863 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.863 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:11:13.863 21:59:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:16.406 21:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:16.406 21:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.789 ************************************ 00:11:17.789 START TEST filesystem_in_capsule_ext4 00:11:17.789 ************************************ 00:11:17.789 21:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:17.789 21:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:17.789 mke2fs 1.47.0 (5-Feb-2023) 00:11:17.789 Discarding device blocks: 
0/522240 done 00:11:17.789 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:17.789 Filesystem UUID: ab6c6d46-4a03-4840-8dde-8b52696ba44e 00:11:17.789 Superblock backups stored on blocks: 00:11:17.789 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:17.789 00:11:17.789 Allocating group tables: 0/64 done 00:11:17.789 Writing inode tables: 0/64 done 00:11:17.789 Creating journal (8192 blocks): done 00:11:20.263 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:11:20.263 00:11:20.263 21:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:20.263 21:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.546 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3373959 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.806 00:11:25.806 real 0m8.179s 00:11:25.806 user 0m0.030s 00:11:25.806 sys 0m0.080s 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:25.806 ************************************ 00:11:25.806 END TEST filesystem_in_capsule_ext4 00:11:25.806 ************************************ 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.806 ************************************ 00:11:25.806 START 
TEST filesystem_in_capsule_btrfs 00:11:25.806 ************************************ 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:25.806 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:26.377 btrfs-progs v6.8.1 00:11:26.377 See https://btrfs.readthedocs.io for more information. 00:11:26.377 00:11:26.377 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:26.377 NOTE: several default settings have changed in version 5.15, please make sure 00:11:26.377 this does not affect your deployments: 00:11:26.377 - DUP for metadata (-m dup) 00:11:26.377 - enabled no-holes (-O no-holes) 00:11:26.377 - enabled free-space-tree (-R free-space-tree) 00:11:26.377 00:11:26.377 Label: (null) 00:11:26.377 UUID: 79aaa8d5-3483-4ba0-a53f-9053bac0bad0 00:11:26.377 Node size: 16384 00:11:26.377 Sector size: 4096 (CPU page size: 4096) 00:11:26.377 Filesystem size: 510.00MiB 00:11:26.377 Block group profiles: 00:11:26.377 Data: single 8.00MiB 00:11:26.377 Metadata: DUP 32.00MiB 00:11:26.377 System: DUP 8.00MiB 00:11:26.377 SSD detected: yes 00:11:26.377 Zoned device: no 00:11:26.377 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:26.377 Checksum: crc32c 00:11:26.377 Number of devices: 1 00:11:26.377 Devices: 00:11:26.377 ID SIZE PATH 00:11:26.377 1 510.00MiB /dev/nvme0n1p1 00:11:26.377 00:11:26.377 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:26.377 21:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3373959 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.318 00:11:27.318 real 0m1.412s 00:11:27.318 user 0m0.026s 00:11:27.318 sys 0m0.121s 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:27.318 ************************************ 00:11:27.318 END TEST filesystem_in_capsule_btrfs 00:11:27.318 ************************************ 00:11:27.318 21:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.318 ************************************ 00:11:27.318 START TEST filesystem_in_capsule_xfs 00:11:27.318 ************************************ 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:27.318 
21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:27.318 21:59:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:27.318 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:27.318 = sectsz=512 attr=2, projid32bit=1 00:11:27.318 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:27.318 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:27.318 data = bsize=4096 blocks=130560, imaxpct=25 00:11:27.318 = sunit=0 swidth=0 blks 00:11:27.318 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:27.318 log =internal log bsize=4096 blocks=16384, version=2 00:11:27.318 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:27.318 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:28.259 Discarding blocks...Done. 
00:11:28.259 21:59:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:28.259 21:59:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3373959 00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:30.807 21:59:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.807 00:11:30.807 real 0m3.318s 00:11:30.807 user 0m0.029s 00:11:30.807 sys 0m0.077s 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:30.807 ************************************ 00:11:30.807 END TEST filesystem_in_capsule_xfs 00:11:30.807 ************************************ 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.807 21:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.807 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3373959 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3373959 ']' 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3373959 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:31.067 21:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3373959 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3373959' 00:11:31.067 killing process with pid 3373959 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3373959 00:11:31.067 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3373959 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:31.327 00:11:31.327 real 0m20.007s 00:11:31.327 user 1m19.131s 00:11:31.327 sys 0m1.455s 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.327 ************************************ 00:11:31.327 END TEST nvmf_filesystem_in_capsule 00:11:31.327 ************************************ 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.327 rmmod nvme_tcp 00:11:31.327 rmmod nvme_fabrics 00:11:31.327 rmmod nvme_keyring 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.327 21:59:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.872 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.872 00:11:33.872 real 0m49.151s 00:11:33.872 user 2m35.723s 00:11:33.872 sys 0m8.761s 00:11:33.872 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.872 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.872 ************************************ 00:11:33.872 END TEST nvmf_filesystem 00:11:33.872 ************************************ 00:11:33.872 21:59:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:33.872 21:59:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:33.872 21:59:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.872 21:59:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:33.872 ************************************ 00:11:33.873 START TEST nvmf_target_discovery 00:11:33.873 ************************************ 00:11:33.873 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:33.873 * Looking for test storage... 
00:11:33.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.873 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:33.873 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:11:33.873 21:59:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:33.873 
21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:33.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.873 --rc genhtml_branch_coverage=1 00:11:33.873 --rc genhtml_function_coverage=1 00:11:33.873 --rc genhtml_legend=1 00:11:33.873 --rc geninfo_all_blocks=1 00:11:33.873 --rc geninfo_unexecuted_blocks=1 00:11:33.873 00:11:33.873 ' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:33.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.873 --rc genhtml_branch_coverage=1 00:11:33.873 --rc genhtml_function_coverage=1 00:11:33.873 --rc genhtml_legend=1 00:11:33.873 --rc geninfo_all_blocks=1 00:11:33.873 --rc geninfo_unexecuted_blocks=1 00:11:33.873 00:11:33.873 ' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:33.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.873 --rc genhtml_branch_coverage=1 00:11:33.873 --rc genhtml_function_coverage=1 00:11:33.873 --rc genhtml_legend=1 00:11:33.873 --rc geninfo_all_blocks=1 00:11:33.873 --rc geninfo_unexecuted_blocks=1 00:11:33.873 00:11:33.873 ' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:33.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.873 --rc genhtml_branch_coverage=1 00:11:33.873 --rc genhtml_function_coverage=1 00:11:33.873 --rc genhtml_legend=1 00:11:33.873 --rc geninfo_all_blocks=1 00:11:33.873 --rc geninfo_unexecuted_blocks=1 00:11:33.873 00:11:33.873 ' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.873 21:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.873 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.874 21:59:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.014 21:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.014 21:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:42.014 21:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:42.014 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:42.014 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:42.014 21:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:42.014 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.014 21:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:42.014 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.014 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:11:42.015 00:11:42.015 --- 10.0.0.2 ping statistics --- 00:11:42.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.015 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:11:42.015 00:11:42.015 --- 10.0.0.1 ping statistics --- 00:11:42.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.015 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:42.015 21:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=3382225 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 3382225 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3382225 ']' 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.015 21:59:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.015 [2024-10-12 21:59:59.629744] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:42.015 [2024-10-12 21:59:59.629810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.015 [2024-10-12 21:59:59.719094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.015 [2024-10-12 21:59:59.767154] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.015 [2024-10-12 21:59:59.767205] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.015 [2024-10-12 21:59:59.767214] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.015 [2024-10-12 21:59:59.767221] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.015 [2024-10-12 21:59:59.767226] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:42.015 [2024-10-12 21:59:59.767374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.015 [2024-10-12 21:59:59.767531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.015 [2024-10-12 21:59:59.767686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.015 [2024-10-12 21:59:59.767687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.015 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:42.015 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:42.015 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:42.015 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.015 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.015 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.015 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.015 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.015 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.015 [2024-10-12 22:00:00.491116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.015 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:42.277 22:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 Null1 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 [2024-10-12 22:00:00.551563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 Null2 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 
22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.277 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 Null3 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 Null4 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:42.278 22:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.278 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:42.540 00:11:42.540 Discovery Log Number of Records 6, Generation counter 6 00:11:42.540 =====Discovery Log Entry 0====== 00:11:42.540 trtype: tcp 00:11:42.540 adrfam: ipv4 00:11:42.540 subtype: current discovery subsystem 00:11:42.540 treq: not required 00:11:42.540 portid: 0 00:11:42.540 trsvcid: 4420 00:11:42.540 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.540 traddr: 10.0.0.2 00:11:42.540 eflags: explicit discovery connections, duplicate discovery information 00:11:42.540 sectype: none 00:11:42.540 =====Discovery Log Entry 1====== 00:11:42.540 trtype: tcp 00:11:42.540 adrfam: ipv4 00:11:42.540 subtype: nvme subsystem 00:11:42.540 treq: not required 00:11:42.540 portid: 0 00:11:42.540 trsvcid: 4420 00:11:42.540 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:42.540 traddr: 10.0.0.2 00:11:42.540 eflags: none 00:11:42.540 sectype: none 00:11:42.540 =====Discovery Log Entry 2====== 00:11:42.540 trtype: tcp 00:11:42.540 adrfam: ipv4 00:11:42.540 subtype: nvme subsystem 00:11:42.540 treq: not required 00:11:42.540 portid: 0 00:11:42.540 trsvcid: 4420 00:11:42.540 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:42.540 traddr: 10.0.0.2 00:11:42.540 eflags: none 00:11:42.540 sectype: none 00:11:42.540 =====Discovery Log Entry 3====== 00:11:42.540 trtype: tcp 00:11:42.540 adrfam: ipv4 00:11:42.540 subtype: nvme subsystem 00:11:42.540 treq: not required 00:11:42.540 portid: 
0 00:11:42.540 trsvcid: 4420 00:11:42.540 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:42.540 traddr: 10.0.0.2 00:11:42.540 eflags: none 00:11:42.540 sectype: none 00:11:42.540 =====Discovery Log Entry 4====== 00:11:42.540 trtype: tcp 00:11:42.540 adrfam: ipv4 00:11:42.540 subtype: nvme subsystem 00:11:42.540 treq: not required 00:11:42.540 portid: 0 00:11:42.540 trsvcid: 4420 00:11:42.540 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:42.540 traddr: 10.0.0.2 00:11:42.540 eflags: none 00:11:42.540 sectype: none 00:11:42.540 =====Discovery Log Entry 5====== 00:11:42.540 trtype: tcp 00:11:42.540 adrfam: ipv4 00:11:42.540 subtype: discovery subsystem referral 00:11:42.540 treq: not required 00:11:42.540 portid: 0 00:11:42.540 trsvcid: 4430 00:11:42.540 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.540 traddr: 10.0.0.2 00:11:42.540 eflags: none 00:11:42.540 sectype: none 00:11:42.540 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:42.540 Perform nvmf subsystem discovery via RPC 00:11:42.540 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:42.540 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.540 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.540 [ 00:11:42.540 { 00:11:42.540 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:42.540 "subtype": "Discovery", 00:11:42.540 "listen_addresses": [ 00:11:42.540 { 00:11:42.540 "trtype": "TCP", 00:11:42.540 "adrfam": "IPv4", 00:11:42.540 "traddr": "10.0.0.2", 00:11:42.540 "trsvcid": "4420" 00:11:42.540 } 00:11:42.540 ], 00:11:42.540 "allow_any_host": true, 00:11:42.540 "hosts": [] 00:11:42.540 }, 00:11:42.540 { 00:11:42.540 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.540 "subtype": "NVMe", 00:11:42.540 "listen_addresses": [ 
00:11:42.540 { 00:11:42.540 "trtype": "TCP", 00:11:42.540 "adrfam": "IPv4", 00:11:42.540 "traddr": "10.0.0.2", 00:11:42.540 "trsvcid": "4420" 00:11:42.540 } 00:11:42.540 ], 00:11:42.540 "allow_any_host": true, 00:11:42.540 "hosts": [], 00:11:42.540 "serial_number": "SPDK00000000000001", 00:11:42.540 "model_number": "SPDK bdev Controller", 00:11:42.540 "max_namespaces": 32, 00:11:42.540 "min_cntlid": 1, 00:11:42.540 "max_cntlid": 65519, 00:11:42.540 "namespaces": [ 00:11:42.540 { 00:11:42.540 "nsid": 1, 00:11:42.540 "bdev_name": "Null1", 00:11:42.540 "name": "Null1", 00:11:42.540 "nguid": "D0CA12394A6D403AB96B221D6A49FFC2", 00:11:42.541 "uuid": "d0ca1239-4a6d-403a-b96b-221d6a49ffc2" 00:11:42.541 } 00:11:42.541 ] 00:11:42.541 }, 00:11:42.541 { 00:11:42.541 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:42.541 "subtype": "NVMe", 00:11:42.541 "listen_addresses": [ 00:11:42.541 { 00:11:42.541 "trtype": "TCP", 00:11:42.541 "adrfam": "IPv4", 00:11:42.541 "traddr": "10.0.0.2", 00:11:42.541 "trsvcid": "4420" 00:11:42.541 } 00:11:42.541 ], 00:11:42.541 "allow_any_host": true, 00:11:42.541 "hosts": [], 00:11:42.541 "serial_number": "SPDK00000000000002", 00:11:42.541 "model_number": "SPDK bdev Controller", 00:11:42.541 "max_namespaces": 32, 00:11:42.541 "min_cntlid": 1, 00:11:42.541 "max_cntlid": 65519, 00:11:42.541 "namespaces": [ 00:11:42.541 { 00:11:42.541 "nsid": 1, 00:11:42.541 "bdev_name": "Null2", 00:11:42.541 "name": "Null2", 00:11:42.541 "nguid": "CC504494AB064BD5B37F511E4DC63F3E", 00:11:42.541 "uuid": "cc504494-ab06-4bd5-b37f-511e4dc63f3e" 00:11:42.541 } 00:11:42.541 ] 00:11:42.541 }, 00:11:42.541 { 00:11:42.541 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:42.541 "subtype": "NVMe", 00:11:42.541 "listen_addresses": [ 00:11:42.541 { 00:11:42.541 "trtype": "TCP", 00:11:42.541 "adrfam": "IPv4", 00:11:42.541 "traddr": "10.0.0.2", 00:11:42.541 "trsvcid": "4420" 00:11:42.541 } 00:11:42.541 ], 00:11:42.541 "allow_any_host": true, 00:11:42.541 "hosts": [], 00:11:42.541 
"serial_number": "SPDK00000000000003", 00:11:42.541 "model_number": "SPDK bdev Controller", 00:11:42.541 "max_namespaces": 32, 00:11:42.541 "min_cntlid": 1, 00:11:42.541 "max_cntlid": 65519, 00:11:42.541 "namespaces": [ 00:11:42.541 { 00:11:42.541 "nsid": 1, 00:11:42.541 "bdev_name": "Null3", 00:11:42.541 "name": "Null3", 00:11:42.541 "nguid": "4F7DA4A032614D66AB9BF93B2EB4A9DA", 00:11:42.541 "uuid": "4f7da4a0-3261-4d66-ab9b-f93b2eb4a9da" 00:11:42.541 } 00:11:42.541 ] 00:11:42.541 }, 00:11:42.541 { 00:11:42.541 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:42.541 "subtype": "NVMe", 00:11:42.541 "listen_addresses": [ 00:11:42.541 { 00:11:42.541 "trtype": "TCP", 00:11:42.541 "adrfam": "IPv4", 00:11:42.541 "traddr": "10.0.0.2", 00:11:42.541 "trsvcid": "4420" 00:11:42.541 } 00:11:42.541 ], 00:11:42.541 "allow_any_host": true, 00:11:42.541 "hosts": [], 00:11:42.541 "serial_number": "SPDK00000000000004", 00:11:42.541 "model_number": "SPDK bdev Controller", 00:11:42.541 "max_namespaces": 32, 00:11:42.541 "min_cntlid": 1, 00:11:42.541 "max_cntlid": 65519, 00:11:42.541 "namespaces": [ 00:11:42.541 { 00:11:42.541 "nsid": 1, 00:11:42.541 "bdev_name": "Null4", 00:11:42.541 "name": "Null4", 00:11:42.541 "nguid": "40B11EBDDD2F4B08AD048A1433E036CB", 00:11:42.541 "uuid": "40b11ebd-dd2f-4b08-ad04-8a1433e036cb" 00:11:42.541 } 00:11:42.541 ] 00:11:42.541 } 00:11:42.541 ] 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.541 22:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.541 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.541 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.541 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:42.541 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.541 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.541 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.541 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:42.541 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.541 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:42.803 
22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.803 rmmod nvme_tcp 00:11:42.803 rmmod nvme_fabrics 00:11:42.803 rmmod nvme_keyring 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 3382225 ']' 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 3382225 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3382225 ']' 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3382225 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3382225 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3382225' 00:11:42.803 killing process with pid 3382225 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3382225 00:11:42.803 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3382225 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.064 22:00:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.608 00:11:45.608 real 0m11.645s 00:11:45.608 user 0m8.907s 00:11:45.608 sys 0m6.069s 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.608 ************************************ 00:11:45.608 END TEST nvmf_target_discovery 00:11:45.608 ************************************ 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.608 ************************************ 00:11:45.608 START TEST nvmf_referrals 00:11:45.608 ************************************ 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:45.608 * Looking for test storage... 
00:11:45.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:45.608 22:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.608 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.609 
--rc genhtml_branch_coverage=1 00:11:45.609 --rc genhtml_function_coverage=1 00:11:45.609 --rc genhtml_legend=1 00:11:45.609 --rc geninfo_all_blocks=1 00:11:45.609 --rc geninfo_unexecuted_blocks=1 00:11:45.609 00:11:45.609 ' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.609 --rc genhtml_branch_coverage=1 00:11:45.609 --rc genhtml_function_coverage=1 00:11:45.609 --rc genhtml_legend=1 00:11:45.609 --rc geninfo_all_blocks=1 00:11:45.609 --rc geninfo_unexecuted_blocks=1 00:11:45.609 00:11:45.609 ' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.609 --rc genhtml_branch_coverage=1 00:11:45.609 --rc genhtml_function_coverage=1 00:11:45.609 --rc genhtml_legend=1 00:11:45.609 --rc geninfo_all_blocks=1 00:11:45.609 --rc geninfo_unexecuted_blocks=1 00:11:45.609 00:11:45.609 ' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.609 --rc genhtml_branch_coverage=1 00:11:45.609 --rc genhtml_function_coverage=1 00:11:45.609 --rc genhtml_legend=1 00:11:45.609 --rc geninfo_all_blocks=1 00:11:45.609 --rc geninfo_unexecuted_blocks=1 00:11:45.609 00:11:45.609 ' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.609 
22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.609 22:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:45.609 22:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.609 22:00:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:53.746 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:53.746 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown 
]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:53.746 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.746 22:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:53.746 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 
-- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.746 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:11:53.747 00:11:53.747 --- 10.0.0.2 ping statistics --- 00:11:53.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.747 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:11:53.747 00:11:53.747 --- 10.0.0.1 ping statistics --- 00:11:53.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.747 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=3387428 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 3387428 00:11:53.747 
22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3387428 ']' 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.747 22:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.747 [2024-10-12 22:00:11.497960] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:53.747 [2024-10-12 22:00:11.498028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.747 [2024-10-12 22:00:11.592494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.747 [2024-10-12 22:00:11.640118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.747 [2024-10-12 22:00:11.640170] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:53.747 [2024-10-12 22:00:11.640178] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.747 [2024-10-12 22:00:11.640186] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.747 [2024-10-12 22:00:11.640192] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.747 [2024-10-12 22:00:11.640372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.747 [2024-10-12 22:00:11.640526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.747 [2024-10-12 22:00:11.640552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.747 [2024-10-12 22:00:11.640560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.008 [2024-10-12 22:00:12.376490] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.008 [2024-10-12 22:00:12.388808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:54.008 22:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.008 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.269 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.270 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.531 22:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.531 22:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.792 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.054 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:55.315 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:55.575 22:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.575 22:00:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:55.836 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.097 rmmod nvme_tcp 00:11:56.097 rmmod nvme_fabrics 00:11:56.097 rmmod nvme_keyring 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 3387428 ']' 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 3387428 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3387428 ']' 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3387428 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.097 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3387428 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3387428' 00:11:56.357 killing process with pid 3387428 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 3387428 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3387428 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.357 22:00:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.902 22:00:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:58.902 00:11:58.902 real 0m13.248s 00:11:58.902 user 0m15.463s 00:11:58.902 sys 0m6.696s 00:11:58.902 22:00:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.902 22:00:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.902 
************************************ 00:11:58.902 END TEST nvmf_referrals 00:11:58.902 ************************************ 00:11:58.902 22:00:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:58.902 22:00:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.902 22:00:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.902 22:00:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:58.902 ************************************ 00:11:58.902 START TEST nvmf_connect_disconnect 00:11:58.902 ************************************ 00:11:58.902 22:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:58.902 * Looking for test storage... 
00:11:58.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:58.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.902 --rc genhtml_branch_coverage=1 00:11:58.902 --rc genhtml_function_coverage=1 00:11:58.902 --rc genhtml_legend=1 00:11:58.902 --rc geninfo_all_blocks=1 00:11:58.902 --rc geninfo_unexecuted_blocks=1 00:11:58.902 00:11:58.902 ' 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:58.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.902 --rc genhtml_branch_coverage=1 00:11:58.902 --rc genhtml_function_coverage=1 00:11:58.902 --rc genhtml_legend=1 00:11:58.902 --rc geninfo_all_blocks=1 00:11:58.902 --rc geninfo_unexecuted_blocks=1 00:11:58.902 00:11:58.902 ' 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:58.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.902 --rc genhtml_branch_coverage=1 00:11:58.902 --rc genhtml_function_coverage=1 00:11:58.902 --rc genhtml_legend=1 00:11:58.902 --rc geninfo_all_blocks=1 00:11:58.902 --rc geninfo_unexecuted_blocks=1 00:11:58.902 00:11:58.902 ' 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:58.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.902 --rc genhtml_branch_coverage=1 00:11:58.902 --rc genhtml_function_coverage=1 00:11:58.902 --rc genhtml_legend=1 00:11:58.902 --rc geninfo_all_blocks=1 00:11:58.902 --rc geninfo_unexecuted_blocks=1 00:11:58.902 00:11:58.902 ' 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.902 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:58.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:58.903 22:00:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.044 22:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:07.044 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:07.045 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:07.045 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:07.045 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.045 22:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:07.045 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.045 22:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:12:07.045 00:12:07.045 --- 10.0.0.2 ping statistics --- 00:12:07.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.045 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:12:07.045 00:12:07.045 --- 10.0.0.1 ping statistics --- 00:12:07.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.045 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:07.045 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # 
nvmfpid=3392257 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 3392257 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3392257 ']' 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.046 22:00:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.046 [2024-10-12 22:00:24.701568] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:07.046 [2024-10-12 22:00:24.701638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.046 [2024-10-12 22:00:24.790669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.046 [2024-10-12 22:00:24.838237] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:07.046 [2024-10-12 22:00:24.838287] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.046 [2024-10-12 22:00:24.838295] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.046 [2024-10-12 22:00:24.838302] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.046 [2024-10-12 22:00:24.838308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.046 [2024-10-12 22:00:24.838387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.046 [2024-10-12 22:00:24.838537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.046 [2024-10-12 22:00:24.838692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.046 [2024-10-12 22:00:24.838694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.046 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:07.046 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:07.046 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:07.046 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.046 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:07.308 22:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.308 [2024-10-12 22:00:25.576417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.308 22:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:07.308 [2024-10-12 22:00:25.645922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:07.308 22:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:09.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.856 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.322 [2024-10-12 22:01:46.611969] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5fdc0 is same with the state(6) to be set 00:13:28.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.352 [2024-10-12 22:02:37.799171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b60ed0 is same with the state(6) to be set 00:14:19.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.907 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.025 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.041 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:00.041 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:00.041 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:00.042 
22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:00.042 rmmod nvme_tcp 00:16:00.042 rmmod nvme_fabrics 00:16:00.042 rmmod nvme_keyring 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 3392257 ']' 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 3392257 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3392257 ']' 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3392257 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3392257 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3392257' 00:16:00.042 killing process with pid 3392257 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@969 -- # kill 3392257 00:16:00.042 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3392257 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.303 22:04:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.217 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:02.217 00:16:02.217 real 4m3.756s 00:16:02.217 user 15m27.993s 00:16:02.217 sys 0m25.116s 00:16:02.217 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:16:02.217 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:02.217 ************************************ 00:16:02.217 END TEST nvmf_connect_disconnect 00:16:02.217 ************************************ 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.479 ************************************ 00:16:02.479 START TEST nvmf_multitarget 00:16:02.479 ************************************ 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:02.479 * Looking for test storage... 
00:16:02.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:02.479 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.479 --rc genhtml_branch_coverage=1 00:16:02.479 --rc genhtml_function_coverage=1 00:16:02.479 --rc genhtml_legend=1 00:16:02.479 --rc geninfo_all_blocks=1 00:16:02.479 --rc geninfo_unexecuted_blocks=1 00:16:02.479 00:16:02.479 ' 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:02.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.479 --rc genhtml_branch_coverage=1 00:16:02.479 --rc genhtml_function_coverage=1 00:16:02.479 --rc genhtml_legend=1 00:16:02.479 --rc geninfo_all_blocks=1 00:16:02.479 --rc geninfo_unexecuted_blocks=1 00:16:02.479 00:16:02.479 ' 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:02.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.479 --rc genhtml_branch_coverage=1 00:16:02.479 --rc genhtml_function_coverage=1 00:16:02.479 --rc genhtml_legend=1 00:16:02.479 --rc geninfo_all_blocks=1 00:16:02.479 --rc geninfo_unexecuted_blocks=1 00:16:02.479 00:16:02.479 ' 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:02.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.479 --rc genhtml_branch_coverage=1 00:16:02.479 --rc genhtml_function_coverage=1 00:16:02.479 --rc genhtml_legend=1 00:16:02.479 --rc geninfo_all_blocks=1 00:16:02.479 --rc geninfo_unexecuted_blocks=1 00:16:02.479 00:16:02.479 ' 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.479 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.741 22:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:02.741 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:02.742 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.742 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:02.742 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:02.742 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:02.742 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.742 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.742 22:04:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.742 22:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:02.742 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:02.742 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:02.742 22:04:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:10.890 22:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:10.890 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:10.890 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.890 22:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:10.890 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:10.890 22:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:10.890 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.890 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:10.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:16:10.891 00:16:10.891 --- 10.0.0.2 ping statistics --- 00:16:10.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.891 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:10.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:16:10.891 00:16:10.891 --- 10.0.0.1 ping statistics --- 00:16:10.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.891 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=3443855 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 3443855 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3443855 ']' 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:10.891 22:04:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:10.891 [2024-10-12 22:04:28.547800] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:10.891 [2024-10-12 22:04:28.547865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.891 [2024-10-12 22:04:28.636742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.891 [2024-10-12 22:04:28.683816] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.891 [2024-10-12 22:04:28.683872] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.891 [2024-10-12 22:04:28.683881] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.891 [2024-10-12 22:04:28.683888] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.891 [2024-10-12 22:04:28.683899] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:10.891 [2024-10-12 22:04:28.684051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.891 [2024-10-12 22:04:28.684210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.891 [2024-10-12 22:04:28.684258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.891 [2024-10-12 22:04:28.684258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.153 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:11.153 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:11.153 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:11.154 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:11.154 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:11.154 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.154 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:11.154 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:11.154 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:11.154 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:11.154 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:16:11.154 "nvmf_tgt_1" 00:16:11.415 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:11.415 "nvmf_tgt_2" 00:16:11.415 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:11.415 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:11.415 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:11.416 22:04:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:11.677 true 00:16:11.677 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:11.677 true 00:16:11.677 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:11.677 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:11.939 22:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:11.939 rmmod nvme_tcp 00:16:11.939 rmmod nvme_fabrics 00:16:11.939 rmmod nvme_keyring 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 3443855 ']' 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 3443855 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3443855 ']' 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3443855 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3443855 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3443855' 00:16:11.939 killing process with pid 3443855 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3443855 00:16:11.939 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3443855 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.200 22:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.119 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:14.119 
00:16:14.119 real 0m11.833s 00:16:14.119 user 0m10.305s 00:16:14.119 sys 0m6.153s 00:16:14.119 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.119 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:14.119 ************************************ 00:16:14.119 END TEST nvmf_multitarget 00:16:14.119 ************************************ 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:14.381 ************************************ 00:16:14.381 START TEST nvmf_rpc 00:16:14.381 ************************************ 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:14.381 * Looking for test storage... 
00:16:14.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.381 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.643 22:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:14.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.643 --rc genhtml_branch_coverage=1 00:16:14.643 --rc genhtml_function_coverage=1 00:16:14.643 --rc genhtml_legend=1 00:16:14.643 --rc geninfo_all_blocks=1 00:16:14.643 --rc geninfo_unexecuted_blocks=1 
00:16:14.643 00:16:14.643 ' 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:14.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.643 --rc genhtml_branch_coverage=1 00:16:14.643 --rc genhtml_function_coverage=1 00:16:14.643 --rc genhtml_legend=1 00:16:14.643 --rc geninfo_all_blocks=1 00:16:14.643 --rc geninfo_unexecuted_blocks=1 00:16:14.643 00:16:14.643 ' 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:14.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.643 --rc genhtml_branch_coverage=1 00:16:14.643 --rc genhtml_function_coverage=1 00:16:14.643 --rc genhtml_legend=1 00:16:14.643 --rc geninfo_all_blocks=1 00:16:14.643 --rc geninfo_unexecuted_blocks=1 00:16:14.643 00:16:14.643 ' 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:14.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.643 --rc genhtml_branch_coverage=1 00:16:14.643 --rc genhtml_function_coverage=1 00:16:14.643 --rc genhtml_legend=1 00:16:14.643 --rc geninfo_all_blocks=1 00:16:14.643 --rc geninfo_unexecuted_blocks=1 00:16:14.643 00:16:14.643 ' 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.643 22:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.643 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:14.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:14.644 22:04:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:14.644 22:04:32 
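One genuine shell error is captured above: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'` and bash reports "integer expression expected", because the tested variable expands to an empty string. The test still evaluates as false, so the run continues, but the usual hardening (a sketch of the general pattern, not the script's actual fix) is to default the operand before the numeric comparison:

```shell
# The trace shows [ '' -eq 1 ] failing: an empty string is not a
# valid integer operand for -eq. Defaulting it to 0 keeps the
# comparison well-formed. ('flag' here is an illustrative name.)
flag=""
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

`${flag:-0}` substitutes 0 when the variable is unset *or* empty, which is exactly the case that trips the error in the log.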
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.950 
22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:22.950 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:22.950 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:22.950 22:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:22.950 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.950 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:22.951 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:16:22.951 22:04:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:22.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:16:22.951 00:16:22.951 --- 10.0.0.2 ping statistics --- 00:16:22.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.951 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:22.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:16:22.951 00:16:22.951 --- 10.0.0.1 ping statistics --- 00:16:22.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.951 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=3448540 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 3448540 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec 
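Each direction of the namespace link is verified with a single ping, and the `rtt min/avg/max/mdev` summary lines above are the pass signal. Purely as an illustration of reading those lines (this parsing is not part of the harness, which only checks that ping succeeds), the average RTT can be extracted with awk:

```shell
# Pull the average RTT out of a captured ping summary line like the
# ones printed above. Splitting on '/' or ' ' puts avg at $(NF-3).
line='rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms'
avg=$(printf '%s\n' "$line" | awk -F'[/ ]' '{print $(NF-3)}')
echo "avg rtt: $avg ms"
```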
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3448540 ']' 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.951 22:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.951 [2024-10-12 22:04:40.527540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:22.951 [2024-10-12 22:04:40.527608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.951 [2024-10-12 22:04:40.619176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.951 [2024-10-12 22:04:40.667594] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.951 [2024-10-12 22:04:40.667653] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:22.951 [2024-10-12 22:04:40.667661] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.951 [2024-10-12 22:04:40.667669] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.951 [2024-10-12 22:04:40.667675] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.951 [2024-10-12 22:04:40.667828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.951 [2024-10-12 22:04:40.667984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.951 [2024-10-12 22:04:40.668195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.951 [2024-10-12 22:04:40.668209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.951 22:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:22.951 "tick_rate": 2400000000, 00:16:22.951 "poll_groups": [ 00:16:22.951 { 00:16:22.951 "name": "nvmf_tgt_poll_group_000", 00:16:22.951 "admin_qpairs": 0, 00:16:22.951 "io_qpairs": 0, 00:16:22.951 "current_admin_qpairs": 0, 00:16:22.951 "current_io_qpairs": 0, 00:16:22.951 "pending_bdev_io": 0, 00:16:22.951 "completed_nvme_io": 0, 00:16:22.951 "transports": [] 00:16:22.951 }, 00:16:22.951 { 00:16:22.951 "name": "nvmf_tgt_poll_group_001", 00:16:22.951 "admin_qpairs": 0, 00:16:22.951 "io_qpairs": 0, 00:16:22.951 "current_admin_qpairs": 0, 00:16:22.951 "current_io_qpairs": 0, 00:16:22.951 "pending_bdev_io": 0, 00:16:22.951 "completed_nvme_io": 0, 00:16:22.951 "transports": [] 00:16:22.951 }, 00:16:22.951 { 00:16:22.951 "name": "nvmf_tgt_poll_group_002", 00:16:22.951 "admin_qpairs": 0, 00:16:22.951 "io_qpairs": 0, 00:16:22.951 "current_admin_qpairs": 0, 00:16:22.951 "current_io_qpairs": 0, 00:16:22.951 "pending_bdev_io": 0, 00:16:22.951 "completed_nvme_io": 0, 00:16:22.951 "transports": [] 00:16:22.951 }, 00:16:22.951 { 00:16:22.951 "name": "nvmf_tgt_poll_group_003", 00:16:22.951 "admin_qpairs": 0, 00:16:22.951 "io_qpairs": 0, 00:16:22.951 "current_admin_qpairs": 0, 00:16:22.951 "current_io_qpairs": 0, 00:16:22.951 "pending_bdev_io": 0, 00:16:22.951 "completed_nvme_io": 0, 00:16:22.951 "transports": [] 00:16:22.951 } 00:16:22.951 ] 00:16:22.951 }' 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:22.951 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:23.214 22:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.214 [2024-10-12 22:04:41.522358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:23.214 "tick_rate": 2400000000, 00:16:23.214 "poll_groups": [ 00:16:23.214 { 00:16:23.214 "name": "nvmf_tgt_poll_group_000", 00:16:23.214 "admin_qpairs": 0, 00:16:23.214 "io_qpairs": 0, 00:16:23.214 "current_admin_qpairs": 0, 00:16:23.214 "current_io_qpairs": 0, 00:16:23.214 "pending_bdev_io": 0, 00:16:23.214 "completed_nvme_io": 0, 00:16:23.214 "transports": [ 00:16:23.214 { 00:16:23.214 "trtype": "TCP" 00:16:23.214 } 00:16:23.214 ] 00:16:23.214 }, 00:16:23.214 { 00:16:23.214 "name": "nvmf_tgt_poll_group_001", 00:16:23.214 "admin_qpairs": 0, 00:16:23.214 "io_qpairs": 0, 00:16:23.214 "current_admin_qpairs": 0, 00:16:23.214 "current_io_qpairs": 0, 00:16:23.214 "pending_bdev_io": 0, 00:16:23.214 
"completed_nvme_io": 0, 00:16:23.214 "transports": [ 00:16:23.214 { 00:16:23.214 "trtype": "TCP" 00:16:23.214 } 00:16:23.214 ] 00:16:23.214 }, 00:16:23.214 { 00:16:23.214 "name": "nvmf_tgt_poll_group_002", 00:16:23.214 "admin_qpairs": 0, 00:16:23.214 "io_qpairs": 0, 00:16:23.214 "current_admin_qpairs": 0, 00:16:23.214 "current_io_qpairs": 0, 00:16:23.214 "pending_bdev_io": 0, 00:16:23.214 "completed_nvme_io": 0, 00:16:23.214 "transports": [ 00:16:23.214 { 00:16:23.214 "trtype": "TCP" 00:16:23.214 } 00:16:23.214 ] 00:16:23.214 }, 00:16:23.214 { 00:16:23.214 "name": "nvmf_tgt_poll_group_003", 00:16:23.214 "admin_qpairs": 0, 00:16:23.214 "io_qpairs": 0, 00:16:23.214 "current_admin_qpairs": 0, 00:16:23.214 "current_io_qpairs": 0, 00:16:23.214 "pending_bdev_io": 0, 00:16:23.214 "completed_nvme_io": 0, 00:16:23.214 "transports": [ 00:16:23.214 { 00:16:23.214 "trtype": "TCP" 00:16:23.214 } 00:16:23.214 ] 00:16:23.214 } 00:16:23.214 ] 00:16:23.214 }' 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:23.214 
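The `jcount` and `jsum` helpers traced above (target/rpc.sh@14-20) reduce the `nvmf_get_stats` reply with jq piped into `wc -l` or an awk sum. Isolated, and assuming `jq` is installed, the pattern looks like this (the JSON here is a trimmed-down stand-in for the real reply):

```shell
# The jcount/jsum pattern from target/rpc.sh: count matching JSON
# fields, or sum numeric ones, in an RPC reply.
stats='{"poll_groups":[{"name":"pg0","io_qpairs":0},{"name":"pg1","io_qpairs":0}]}'
count=$(printf '%s\n' "$stats" | jq '.poll_groups[].name' | wc -l)
total=$(printf '%s\n' "$stats" | jq '.poll_groups[].io_qpairs' \
        | awk '{s+=$1} END {print s}')
echo "groups=$count io_qpairs=$total"
```

This is why the trace asserts `(( 4 == 4 ))` and `(( 0 == 0 ))`: four poll groups exist, and no qpairs have been created yet.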
22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.214 Malloc1 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:23.214 22:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.214 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.215 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.215 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.215 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.215 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.477 [2024-10-12 22:04:41.704586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:16:23.477 [2024-10-12 22:04:41.741682] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:16:23.477 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:23.477 could not add new controller: failed to write to nvme-fabrics device 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.477 22:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.864 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:24.864 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:24.864 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.864 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:24.864 22:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 
00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:27.411 22:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:27.411 [2024-10-12 22:04:45.487478] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:16:27.411 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:27.411 could not add new controller: failed to write to nvme-fabrics device 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:27.411 
22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.411 22:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:28.796 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:28.796 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:28.796 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.796 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:28.796 22:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.708 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:30.969 22:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 [2024-10-12 22:04:49.231144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.969 22:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.354 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:32.354 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:32.354 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:32.354 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:32.354 22:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:34.269 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:34.269 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:34.269 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:34.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.530 
22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.530 [2024-10-12 22:04:52.956573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.530 22:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.442 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:36.442 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:36.442 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.442 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:36.442 22:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.357 22:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.357 [2024-10-12 22:04:56.674045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.357 22:04:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.741 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:39.741 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:39.741 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.741 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:39.741 22:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:42.284 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.285 [2024-10-12 22:05:00.497219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.285 22:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.670 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.670 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:43.670 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:43.670 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:43.670 22:05:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:45.585 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:45.585 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:45.585 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.585 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:45.585 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.585 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:45.585 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.846 [2024-10-12 22:05:04.286508] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.846 22:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.759 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.759 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:47.759 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.759 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:47.759 22:05:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:16:49.692 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:49.692 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:49.692 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.693 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:49.693 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.693 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:49.693 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.693 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.693 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:49.693 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:49.693 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.693 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:49.693 22:05:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.693 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.694 [2024-10-12 22:05:08.064572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.694 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.695 [2024-10-12 22:05:08.132710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.695 
22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.695 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.962 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.962 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.962 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.962 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.962 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.962 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:49.962 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:16:49.963 [2024-10-12 22:05:08.200911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 [2024-10-12 22:05:08.269126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 [2024-10-12 22:05:08.337340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:49.963 "tick_rate": 2400000000, 00:16:49.963 "poll_groups": [ 00:16:49.963 { 00:16:49.963 "name": "nvmf_tgt_poll_group_000", 00:16:49.963 "admin_qpairs": 0, 00:16:49.963 "io_qpairs": 224, 00:16:49.963 "current_admin_qpairs": 0, 00:16:49.963 "current_io_qpairs": 0, 00:16:49.963 "pending_bdev_io": 0, 00:16:49.963 "completed_nvme_io": 227, 00:16:49.963 "transports": [ 00:16:49.963 { 00:16:49.963 "trtype": "TCP" 00:16:49.963 } 00:16:49.963 ] 00:16:49.963 }, 00:16:49.963 { 00:16:49.963 "name": "nvmf_tgt_poll_group_001", 00:16:49.963 "admin_qpairs": 1, 00:16:49.963 "io_qpairs": 223, 00:16:49.963 "current_admin_qpairs": 0, 00:16:49.963 "current_io_qpairs": 0, 00:16:49.963 "pending_bdev_io": 0, 00:16:49.963 "completed_nvme_io": 276, 00:16:49.963 "transports": [ 00:16:49.963 { 00:16:49.963 "trtype": "TCP" 00:16:49.963 } 00:16:49.963 ] 00:16:49.963 }, 00:16:49.963 { 00:16:49.963 "name": "nvmf_tgt_poll_group_002", 00:16:49.963 "admin_qpairs": 6, 00:16:49.963 "io_qpairs": 218, 00:16:49.963 "current_admin_qpairs": 0, 00:16:49.963 "current_io_qpairs": 0, 00:16:49.963 "pending_bdev_io": 0, 
00:16:49.963 "completed_nvme_io": 512, 00:16:49.963 "transports": [ 00:16:49.963 { 00:16:49.963 "trtype": "TCP" 00:16:49.963 } 00:16:49.963 ] 00:16:49.963 }, 00:16:49.963 { 00:16:49.963 "name": "nvmf_tgt_poll_group_003", 00:16:49.963 "admin_qpairs": 0, 00:16:49.963 "io_qpairs": 224, 00:16:49.963 "current_admin_qpairs": 0, 00:16:49.963 "current_io_qpairs": 0, 00:16:49.963 "pending_bdev_io": 0, 00:16:49.963 "completed_nvme_io": 224, 00:16:49.963 "transports": [ 00:16:49.963 { 00:16:49.963 "trtype": "TCP" 00:16:49.963 } 00:16:49.963 ] 00:16:49.963 } 00:16:49.963 ] 00:16:49.963 }' 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:49.963 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:50.224 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:50.224 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:50.224 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:50.225 rmmod nvme_tcp 00:16:50.225 rmmod nvme_fabrics 00:16:50.225 rmmod nvme_keyring 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 3448540 ']' 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 3448540 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3448540 ']' 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3448540 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3448540 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3448540' 00:16:50.225 killing process with pid 3448540 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3448540 00:16:50.225 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3448540 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.486 22:05:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.401 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:52.401 00:16:52.401 real 0m38.184s 00:16:52.401 user 1m54.431s 00:16:52.401 sys 0m7.886s 00:16:52.401 22:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.401 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.401 ************************************ 00:16:52.401 END TEST nvmf_rpc 00:16:52.401 ************************************ 00:16:52.663 22:05:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:52.663 22:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:52.663 22:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.663 22:05:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:52.663 ************************************ 00:16:52.663 START TEST nvmf_invalid 00:16:52.663 ************************************ 00:16:52.663 22:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:52.663 * Looking for test storage... 
00:16:52.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.663 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:52.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.924 --rc genhtml_branch_coverage=1 00:16:52.924 --rc 
genhtml_function_coverage=1 00:16:52.924 --rc genhtml_legend=1 00:16:52.924 --rc geninfo_all_blocks=1 00:16:52.924 --rc geninfo_unexecuted_blocks=1 00:16:52.924 00:16:52.924 ' 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:52.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.924 --rc genhtml_branch_coverage=1 00:16:52.924 --rc genhtml_function_coverage=1 00:16:52.924 --rc genhtml_legend=1 00:16:52.924 --rc geninfo_all_blocks=1 00:16:52.924 --rc geninfo_unexecuted_blocks=1 00:16:52.924 00:16:52.924 ' 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:52.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.924 --rc genhtml_branch_coverage=1 00:16:52.924 --rc genhtml_function_coverage=1 00:16:52.924 --rc genhtml_legend=1 00:16:52.924 --rc geninfo_all_blocks=1 00:16:52.924 --rc geninfo_unexecuted_blocks=1 00:16:52.924 00:16:52.924 ' 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:52.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.924 --rc genhtml_branch_coverage=1 00:16:52.924 --rc genhtml_function_coverage=1 00:16:52.924 --rc genhtml_legend=1 00:16:52.924 --rc geninfo_all_blocks=1 00:16:52.924 --rc geninfo_unexecuted_blocks=1 00:16:52.924 00:16:52.924 ' 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.924 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.925 22:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:52.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:52.925 22:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:52.925 22:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.071 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.071 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:01.071 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:01.071 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:01.071 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:01.071 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:01.071 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:01.071 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:01.071 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:01.071 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:01.072 22:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:01.072 22:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:01.072 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:01.072 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b 
== \0\x\1\0\1\7 ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:01.072 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.072 22:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:01.072 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:01.072 22:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:01.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:17:01.072 00:17:01.072 --- 10.0.0.2 ping statistics --- 00:17:01.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.072 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:17:01.072 00:17:01.072 --- 10.0.0.1 ping statistics --- 00:17:01.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.072 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.072 22:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=3458282 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 3458282 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3458282 ']' 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.072 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:01.073 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.073 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:01.073 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.073 22:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.073 [2024-10-12 22:05:18.780298] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:01.073 [2024-10-12 22:05:18.780364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.073 [2024-10-12 22:05:18.874453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.073 [2024-10-12 22:05:18.922539] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.073 [2024-10-12 22:05:18.922594] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.073 [2024-10-12 22:05:18.922602] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.073 [2024-10-12 22:05:18.922610] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.073 [2024-10-12 22:05:18.922616] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:01.073 [2024-10-12 22:05:18.922812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.073 [2024-10-12 22:05:18.922950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.073 [2024-10-12 22:05:18.923117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.073 [2024-10-12 22:05:18.923128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.334 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.334 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:01.334 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:01.334 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:01.334 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.334 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.334 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:01.334 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8282 00:17:01.334 [2024-10-12 22:05:19.811941] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:01.595 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:01.595 { 00:17:01.595 "nqn": "nqn.2016-06.io.spdk:cnode8282", 00:17:01.595 "tgt_name": "foobar", 00:17:01.595 "method": "nvmf_create_subsystem", 00:17:01.595 "req_id": 1 00:17:01.595 } 00:17:01.595 Got JSON-RPC error 
response 00:17:01.595 response: 00:17:01.595 { 00:17:01.595 "code": -32603, 00:17:01.595 "message": "Unable to find target foobar" 00:17:01.595 }' 00:17:01.595 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:01.595 { 00:17:01.595 "nqn": "nqn.2016-06.io.spdk:cnode8282", 00:17:01.595 "tgt_name": "foobar", 00:17:01.595 "method": "nvmf_create_subsystem", 00:17:01.595 "req_id": 1 00:17:01.595 } 00:17:01.595 Got JSON-RPC error response 00:17:01.595 response: 00:17:01.595 { 00:17:01.595 "code": -32603, 00:17:01.595 "message": "Unable to find target foobar" 00:17:01.595 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:01.595 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:01.595 22:05:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7603 00:17:01.595 [2024-10-12 22:05:20.020817] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7603: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:01.595 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:01.595 { 00:17:01.595 "nqn": "nqn.2016-06.io.spdk:cnode7603", 00:17:01.595 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:01.595 "method": "nvmf_create_subsystem", 00:17:01.595 "req_id": 1 00:17:01.595 } 00:17:01.595 Got JSON-RPC error response 00:17:01.595 response: 00:17:01.595 { 00:17:01.595 "code": -32602, 00:17:01.595 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:01.595 }' 00:17:01.595 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:01.595 { 00:17:01.595 "nqn": "nqn.2016-06.io.spdk:cnode7603", 00:17:01.595 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:01.595 "method": "nvmf_create_subsystem", 00:17:01.595 
"req_id": 1 00:17:01.595 } 00:17:01.595 Got JSON-RPC error response 00:17:01.595 response: 00:17:01.595 { 00:17:01.595 "code": -32602, 00:17:01.595 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:01.595 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:01.595 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:01.595 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8 00:17:01.857 [2024-10-12 22:05:20.229577] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8: invalid model number 'SPDK_Controller' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:01.857 { 00:17:01.857 "nqn": "nqn.2016-06.io.spdk:cnode8", 00:17:01.857 "model_number": "SPDK_Controller\u001f", 00:17:01.857 "method": "nvmf_create_subsystem", 00:17:01.857 "req_id": 1 00:17:01.857 } 00:17:01.857 Got JSON-RPC error response 00:17:01.857 response: 00:17:01.857 { 00:17:01.857 "code": -32602, 00:17:01.857 "message": "Invalid MN SPDK_Controller\u001f" 00:17:01.857 }' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:01.857 { 00:17:01.857 "nqn": "nqn.2016-06.io.spdk:cnode8", 00:17:01.857 "model_number": "SPDK_Controller\u001f", 00:17:01.857 "method": "nvmf_create_subsystem", 00:17:01.857 "req_id": 1 00:17:01.857 } 00:17:01.857 Got JSON-RPC error response 00:17:01.857 response: 00:17:01.857 { 00:17:01.857 "code": -32602, 00:17:01.857 "message": "Invalid MN SPDK_Controller\u001f" 00:17:01.857 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:01.857 
22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.857 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:01.857 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.857 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:02.119 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:02.119 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.119 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.119 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ L == \- ]] 00:17:02.120 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'LEG>;gFzg[Tq+vNAr*@0k' 00:17:02.120 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'LEG>;gFzg[Tq+vNAr*@0k' nqn.2016-06.io.spdk:cnode15512 00:17:02.381 [2024-10-12 22:05:20.615045] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15512: invalid serial number 'LEG>;gFzg[Tq+vNAr*@0k' 00:17:02.381 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:02.381 { 00:17:02.381 "nqn": "nqn.2016-06.io.spdk:cnode15512", 00:17:02.381 "serial_number": "LEG>;gFzg[Tq+vNAr*@0k", 00:17:02.381 "method": "nvmf_create_subsystem", 00:17:02.381 "req_id": 1 00:17:02.381 } 00:17:02.381 Got JSON-RPC error response 00:17:02.381 response: 00:17:02.381 { 00:17:02.381 "code": -32602, 00:17:02.381 "message": "Invalid SN LEG>;gFzg[Tq+vNAr*@0k" 00:17:02.381 }' 00:17:02.381 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:02.381 { 00:17:02.381 "nqn": "nqn.2016-06.io.spdk:cnode15512", 00:17:02.381 "serial_number": "LEG>;gFzg[Tq+vNAr*@0k", 00:17:02.381 "method": "nvmf_create_subsystem", 00:17:02.381 "req_id": 1 00:17:02.381 } 00:17:02.381 Got JSON-RPC error response 00:17:02.381 response: 00:17:02.381 { 00:17:02.381 "code": -32602, 00:17:02.381 "message": "Invalid SN LEG>;gFzg[Tq+vNAr*@0k" 00:17:02.382 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:02.382 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:02.382 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:02.382 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:02.382 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:02.382 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:02.383 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.383 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:02.644 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:02.644 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.644 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.645 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.645 22:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'hGNZ'\''(rA*1?O,pLf_Jf}L%@i-9x2;qH40MR;|5IL%' 00:17:02.645 22:05:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'hGNZ'\''(rA*1?O,pLf_Jf}L%@i-9x2;qH40MR;|5IL%' nqn.2016-06.io.spdk:cnode18726 00:17:02.906 [2024-10-12 22:05:21.153050] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18726: invalid model number 'hGNZ'(rA*1?O,pLf_Jf}L%@i-9x2;qH40MR;|5IL%' 00:17:02.906 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:02.906 { 00:17:02.906 "nqn": "nqn.2016-06.io.spdk:cnode18726", 00:17:02.906 "model_number": "hGNZ'\''(rA*1?O,pLf_Jf}L%@i-9x2;qH40MR;|5IL%", 00:17:02.906 "method": "nvmf_create_subsystem", 00:17:02.906 "req_id": 1 00:17:02.906 } 00:17:02.906 Got JSON-RPC error response 00:17:02.906 response: 00:17:02.906 { 00:17:02.906 "code": -32602, 00:17:02.906 "message": "Invalid MN hGNZ'\''(rA*1?O,pLf_Jf}L%@i-9x2;qH40MR;|5IL%" 00:17:02.906 }' 00:17:02.906 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:02.906 { 00:17:02.906 "nqn": 
"nqn.2016-06.io.spdk:cnode18726", 00:17:02.906 "model_number": "hGNZ'(rA*1?O,pLf_Jf}L%@i-9x2;qH40MR;|5IL%", 00:17:02.906 "method": "nvmf_create_subsystem", 00:17:02.906 "req_id": 1 00:17:02.906 } 00:17:02.906 Got JSON-RPC error response 00:17:02.906 response: 00:17:02.906 { 00:17:02.906 "code": -32602, 00:17:02.906 "message": "Invalid MN hGNZ'(rA*1?O,pLf_Jf}L%@i-9x2;qH40MR;|5IL%" 00:17:02.906 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:02.906 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:02.906 [2024-10-12 22:05:21.353910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.906 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:03.167 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:03.167 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:03.167 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:03.167 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:03.167 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:03.428 [2024-10-12 22:05:21.767525] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:03.428 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:03.428 { 00:17:03.428 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:03.428 "listen_address": { 00:17:03.428 "trtype": "tcp", 00:17:03.428 "traddr": "", 00:17:03.428 "trsvcid": "4421" 
00:17:03.428 }, 00:17:03.428 "method": "nvmf_subsystem_remove_listener", 00:17:03.428 "req_id": 1 00:17:03.428 } 00:17:03.428 Got JSON-RPC error response 00:17:03.428 response: 00:17:03.428 { 00:17:03.428 "code": -32602, 00:17:03.428 "message": "Invalid parameters" 00:17:03.428 }' 00:17:03.428 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:03.428 { 00:17:03.428 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:03.428 "listen_address": { 00:17:03.428 "trtype": "tcp", 00:17:03.428 "traddr": "", 00:17:03.428 "trsvcid": "4421" 00:17:03.428 }, 00:17:03.428 "method": "nvmf_subsystem_remove_listener", 00:17:03.428 "req_id": 1 00:17:03.428 } 00:17:03.428 Got JSON-RPC error response 00:17:03.428 response: 00:17:03.428 { 00:17:03.428 "code": -32602, 00:17:03.428 "message": "Invalid parameters" 00:17:03.428 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:03.428 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11828 -i 0 00:17:03.688 [2024-10-12 22:05:21.964131] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11828: invalid cntlid range [0-65519] 00:17:03.688 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:03.688 { 00:17:03.688 "nqn": "nqn.2016-06.io.spdk:cnode11828", 00:17:03.688 "min_cntlid": 0, 00:17:03.688 "method": "nvmf_create_subsystem", 00:17:03.688 "req_id": 1 00:17:03.688 } 00:17:03.688 Got JSON-RPC error response 00:17:03.688 response: 00:17:03.688 { 00:17:03.688 "code": -32602, 00:17:03.688 "message": "Invalid cntlid range [0-65519]" 00:17:03.688 }' 00:17:03.688 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:03.688 { 00:17:03.688 "nqn": "nqn.2016-06.io.spdk:cnode11828", 00:17:03.688 "min_cntlid": 0, 00:17:03.688 "method": 
"nvmf_create_subsystem", 00:17:03.688 "req_id": 1 00:17:03.688 } 00:17:03.688 Got JSON-RPC error response 00:17:03.688 response: 00:17:03.688 { 00:17:03.688 "code": -32602, 00:17:03.688 "message": "Invalid cntlid range [0-65519]" 00:17:03.688 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:03.688 22:05:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18443 -i 65520 00:17:03.688 [2024-10-12 22:05:22.152735] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18443: invalid cntlid range [65520-65519] 00:17:03.949 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:03.949 { 00:17:03.949 "nqn": "nqn.2016-06.io.spdk:cnode18443", 00:17:03.949 "min_cntlid": 65520, 00:17:03.949 "method": "nvmf_create_subsystem", 00:17:03.949 "req_id": 1 00:17:03.949 } 00:17:03.949 Got JSON-RPC error response 00:17:03.949 response: 00:17:03.949 { 00:17:03.949 "code": -32602, 00:17:03.949 "message": "Invalid cntlid range [65520-65519]" 00:17:03.949 }' 00:17:03.949 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:03.949 { 00:17:03.949 "nqn": "nqn.2016-06.io.spdk:cnode18443", 00:17:03.949 "min_cntlid": 65520, 00:17:03.949 "method": "nvmf_create_subsystem", 00:17:03.949 "req_id": 1 00:17:03.949 } 00:17:03.949 Got JSON-RPC error response 00:17:03.949 response: 00:17:03.949 { 00:17:03.949 "code": -32602, 00:17:03.949 "message": "Invalid cntlid range [65520-65519]" 00:17:03.949 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:03.949 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15516 -I 0 00:17:03.949 [2024-10-12 22:05:22.341314] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode15516: invalid cntlid range [1-0] 00:17:03.949 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:03.949 { 00:17:03.949 "nqn": "nqn.2016-06.io.spdk:cnode15516", 00:17:03.949 "max_cntlid": 0, 00:17:03.949 "method": "nvmf_create_subsystem", 00:17:03.949 "req_id": 1 00:17:03.949 } 00:17:03.949 Got JSON-RPC error response 00:17:03.949 response: 00:17:03.949 { 00:17:03.949 "code": -32602, 00:17:03.949 "message": "Invalid cntlid range [1-0]" 00:17:03.949 }' 00:17:03.949 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:03.949 { 00:17:03.949 "nqn": "nqn.2016-06.io.spdk:cnode15516", 00:17:03.949 "max_cntlid": 0, 00:17:03.949 "method": "nvmf_create_subsystem", 00:17:03.949 "req_id": 1 00:17:03.949 } 00:17:03.949 Got JSON-RPC error response 00:17:03.949 response: 00:17:03.949 { 00:17:03.949 "code": -32602, 00:17:03.949 "message": "Invalid cntlid range [1-0]" 00:17:03.949 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:03.949 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20656 -I 65520 00:17:04.209 [2024-10-12 22:05:22.529889] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20656: invalid cntlid range [1-65520] 00:17:04.209 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:04.210 { 00:17:04.210 "nqn": "nqn.2016-06.io.spdk:cnode20656", 00:17:04.210 "max_cntlid": 65520, 00:17:04.210 "method": "nvmf_create_subsystem", 00:17:04.210 "req_id": 1 00:17:04.210 } 00:17:04.210 Got JSON-RPC error response 00:17:04.210 response: 00:17:04.210 { 00:17:04.210 "code": -32602, 00:17:04.210 "message": "Invalid cntlid range [1-65520]" 00:17:04.210 }' 00:17:04.210 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:17:04.210 { 00:17:04.210 "nqn": "nqn.2016-06.io.spdk:cnode20656", 00:17:04.210 "max_cntlid": 65520, 00:17:04.210 "method": "nvmf_create_subsystem", 00:17:04.210 "req_id": 1 00:17:04.210 } 00:17:04.210 Got JSON-RPC error response 00:17:04.210 response: 00:17:04.210 { 00:17:04.210 "code": -32602, 00:17:04.210 "message": "Invalid cntlid range [1-65520]" 00:17:04.210 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:04.210 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18915 -i 6 -I 5 00:17:04.470 [2024-10-12 22:05:22.714502] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18915: invalid cntlid range [6-5] 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:04.470 { 00:17:04.470 "nqn": "nqn.2016-06.io.spdk:cnode18915", 00:17:04.470 "min_cntlid": 6, 00:17:04.470 "max_cntlid": 5, 00:17:04.470 "method": "nvmf_create_subsystem", 00:17:04.470 "req_id": 1 00:17:04.470 } 00:17:04.470 Got JSON-RPC error response 00:17:04.470 response: 00:17:04.470 { 00:17:04.470 "code": -32602, 00:17:04.470 "message": "Invalid cntlid range [6-5]" 00:17:04.470 }' 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:04.470 { 00:17:04.470 "nqn": "nqn.2016-06.io.spdk:cnode18915", 00:17:04.470 "min_cntlid": 6, 00:17:04.470 "max_cntlid": 5, 00:17:04.470 "method": "nvmf_create_subsystem", 00:17:04.470 "req_id": 1 00:17:04.470 } 00:17:04.470 Got JSON-RPC error response 00:17:04.470 response: 00:17:04.470 { 00:17:04.470 "code": -32602, 00:17:04.470 "message": "Invalid cntlid range [6-5]" 00:17:04.470 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:04.470 { 00:17:04.470 "name": "foobar", 00:17:04.470 "method": "nvmf_delete_target", 00:17:04.470 "req_id": 1 00:17:04.470 } 00:17:04.470 Got JSON-RPC error response 00:17:04.470 response: 00:17:04.470 { 00:17:04.470 "code": -32602, 00:17:04.470 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:04.470 }' 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:04.470 { 00:17:04.470 "name": "foobar", 00:17:04.470 "method": "nvmf_delete_target", 00:17:04.470 "req_id": 1 00:17:04.470 } 00:17:04.470 Got JSON-RPC error response 00:17:04.470 response: 00:17:04.470 { 00:17:04.470 "code": -32602, 00:17:04.470 "message": "The specified target doesn't exist, cannot delete it." 00:17:04.470 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.470 rmmod nvme_tcp 00:17:04.470 
rmmod nvme_fabrics 00:17:04.470 rmmod nvme_keyring 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@513 -- # '[' -n 3458282 ']' 00:17:04.470 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # killprocess 3458282 00:17:04.471 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3458282 ']' 00:17:04.471 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3458282 00:17:04.471 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:04.471 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.471 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3458282 00:17:04.731 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.731 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.731 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3458282' 00:17:04.731 killing process with pid 3458282 00:17:04.731 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3458282 00:17:04.731 22:05:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3458282 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:04.731 22:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-save 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@787 -- # iptables-restore 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.731 22:05:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:07.279 00:17:07.279 real 0m14.226s 00:17:07.279 user 0m21.297s 00:17:07.279 sys 0m6.746s 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:07.279 ************************************ 00:17:07.279 END TEST nvmf_invalid 00:17:07.279 ************************************ 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.279 ************************************ 00:17:07.279 START TEST nvmf_connect_stress 00:17:07.279 ************************************ 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:07.279 * Looking for test storage... 00:17:07.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.279 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.280 --rc genhtml_branch_coverage=1 00:17:07.280 --rc genhtml_function_coverage=1 00:17:07.280 --rc genhtml_legend=1 00:17:07.280 --rc 
geninfo_all_blocks=1 00:17:07.280 --rc geninfo_unexecuted_blocks=1 00:17:07.280 00:17:07.280 ' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.280 --rc genhtml_branch_coverage=1 00:17:07.280 --rc genhtml_function_coverage=1 00:17:07.280 --rc genhtml_legend=1 00:17:07.280 --rc geninfo_all_blocks=1 00:17:07.280 --rc geninfo_unexecuted_blocks=1 00:17:07.280 00:17:07.280 ' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.280 --rc genhtml_branch_coverage=1 00:17:07.280 --rc genhtml_function_coverage=1 00:17:07.280 --rc genhtml_legend=1 00:17:07.280 --rc geninfo_all_blocks=1 00:17:07.280 --rc geninfo_unexecuted_blocks=1 00:17:07.280 00:17:07.280 ' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:07.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.280 --rc genhtml_branch_coverage=1 00:17:07.280 --rc genhtml_function_coverage=1 00:17:07.280 --rc genhtml_legend=1 00:17:07.280 --rc geninfo_all_blocks=1 00:17:07.280 --rc geninfo_unexecuted_blocks=1 00:17:07.280 00:17:07.280 ' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.280 
22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 
-- # gather_supported_nvmf_pci_devs 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:07.280 22:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:15.422 22:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:15.422 22:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:15.422 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:15.422 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:15.422 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:15.422 22:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:15.422 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:15.422 22:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.422 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.423 22:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:15.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:17:15.423 00:17:15.423 --- 10.0.0.2 ping statistics --- 00:17:15.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.423 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:15.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:17:15.423 00:17:15.423 --- 10.0.0.1 ping statistics --- 00:17:15.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.423 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=3463451 00:17:15.423 22:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 3463451 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3463451 ']' 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.423 22:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.423 [2024-10-12 22:05:33.003134] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:15.423 [2024-10-12 22:05:33.003203] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.423 [2024-10-12 22:05:33.093576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:15.423 [2024-10-12 22:05:33.141760] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:15.423 [2024-10-12 22:05:33.141818] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.423 [2024-10-12 22:05:33.141827] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.423 [2024-10-12 22:05:33.141834] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.423 [2024-10-12 22:05:33.141840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.423 [2024-10-12 22:05:33.142002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.423 [2024-10-12 22:05:33.142163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:15.423 [2024-10-12 22:05:33.142164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.423 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.423 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:15.423 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:15.423 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:15.423 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.685 [2024-10-12 22:05:33.932496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.685 [2024-10-12 22:05:33.967920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.685 NULL1 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3463618 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.685 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:15.686 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:15.686 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:15.686 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.686 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.686 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.947 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.947 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:15.947 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.947 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.947 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.518 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.518 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:16.518 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.518 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.518 22:05:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.783 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.783 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:16.783 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.783 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.783 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.043 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.043 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:17.043 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.043 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.043 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.303 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.303 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:17.303 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.303 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.303 22:05:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.563 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.563 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:17.563 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.563 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.563 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.160 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.160 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:18.160 22:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.160 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.160 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.447 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.447 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:18.447 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.447 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.447 22:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.732 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.732 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:18.732 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.732 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.732 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.994 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.994 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:18.994 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.994 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.994 
22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.255 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.255 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:19.255 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.256 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.256 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.516 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.516 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:19.516 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.516 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.516 22:05:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.085 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.085 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:20.085 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.085 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.085 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.346 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.346 
22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:20.346 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.346 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.346 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.607 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.607 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:20.607 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.607 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.607 22:05:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.867 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.867 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:20.867 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.867 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.867 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.438 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.438 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:21.438 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:17:21.438 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.438 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.699 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.699 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:21.699 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.699 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.699 22:05:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.960 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.960 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:21.960 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.960 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.960 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.221 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.221 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:22.221 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.221 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.221 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:17:22.482 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.482 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:22.482 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.482 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.482 22:05:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.053 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.053 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:23.053 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.053 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.053 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.314 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.314 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:23.314 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.314 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.314 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.575 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.575 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3463618 00:17:23.575 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.575 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.575 22:05:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.835 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.835 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:23.835 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.835 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.835 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.096 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.096 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:24.096 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.096 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.096 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.668 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.668 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:24.668 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.668 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:24.668 22:05:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.929 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.929 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:24.929 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.929 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.929 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.191 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.191 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:25.191 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.191 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.191 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.452 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.452 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:25.452 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.452 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.452 22:05:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.713 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
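The repeated `kill -0 3463618` / `rpc_cmd` cycles traced above are a liveness poll: `kill -0` sends no signal at all, it only reports (via its exit status) whether the target process still exists, so connect_stress.sh keeps issuing RPC work while the stress process is alive and falls through to cleanup once it exits. A minimal sketch of that pattern, with hypothetical function and stub names (not SPDK's actual helpers):

```shell
#!/usr/bin/env bash
# Poll a process with `kill -0` and do work on each pass while it is alive.
# `kill -0 PID` delivers no signal; it succeeds only if PID exists.
poll_while_alive() {
    local pid=$1
    while kill -0 "$pid" 2>/dev/null; do
        issue_rpc_step          # stand-in for the rpc_cmd call in the trace
        sleep 0.1
    done
}

issue_rpc_step() { :; }         # no-op stub so the sketch runs standalone

# Demo against a short-lived background process:
sleep 0.3 &
demo_pid=$!
poll_while_alive "$demo_pid"
echo "stress process $demo_pid has exited"
```

Once the polled process is gone, a subsequent `kill -0` fails with "No such process", which is exactly the message the log shows at the end of the test before `wait` and cleanup run.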
00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3463618 00:17:25.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3463618) - No such process 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3463618 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.713 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.713 rmmod nvme_tcp 00:17:25.975 rmmod nvme_fabrics 00:17:25.975 rmmod nvme_keyring 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 3463451 ']' 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 3463451 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3463451 ']' 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3463451 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3463451 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3463451' 00:17:25.975 killing process with pid 3463451 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3463451 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3463451 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.975 22:05:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:28.523 00:17:28.523 real 0m21.273s 00:17:28.523 user 0m42.322s 00:17:28.523 sys 0m9.317s 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.523 ************************************ 00:17:28.523 END TEST nvmf_connect_stress 00:17:28.523 ************************************ 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.523 ************************************ 00:17:28.523 START TEST nvmf_fused_ordering 00:17:28.523 ************************************ 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:28.523 * Looking for test storage... 00:17:28.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.523 22:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:28.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.523 --rc genhtml_branch_coverage=1 00:17:28.523 --rc genhtml_function_coverage=1 00:17:28.523 --rc genhtml_legend=1 00:17:28.523 --rc geninfo_all_blocks=1 00:17:28.523 --rc geninfo_unexecuted_blocks=1 00:17:28.523 00:17:28.523 ' 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:28.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.523 --rc genhtml_branch_coverage=1 00:17:28.523 --rc genhtml_function_coverage=1 00:17:28.523 --rc genhtml_legend=1 00:17:28.523 --rc geninfo_all_blocks=1 00:17:28.523 --rc geninfo_unexecuted_blocks=1 00:17:28.523 00:17:28.523 ' 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:28.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.523 --rc genhtml_branch_coverage=1 00:17:28.523 --rc genhtml_function_coverage=1 00:17:28.523 --rc genhtml_legend=1 00:17:28.523 --rc geninfo_all_blocks=1 00:17:28.523 --rc geninfo_unexecuted_blocks=1 00:17:28.523 00:17:28.523 ' 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:28.523 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:28.523 --rc genhtml_branch_coverage=1 00:17:28.523 --rc genhtml_function_coverage=1 00:17:28.523 --rc genhtml_legend=1 00:17:28.523 --rc geninfo_all_blocks=1 00:17:28.523 --rc geninfo_unexecuted_blocks=1 00:17:28.523 00:17:28.523 ' 00:17:28.523 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.524 22:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:28.524 22:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.672 22:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:36.672 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:36.672 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:36.672 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:36.672 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.672 22:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.672 22:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:36.672 22:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:36.672 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.672 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.672 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.672 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:36.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:36.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:17:36.673 00:17:36.673 --- 10.0.0.2 ping statistics --- 00:17:36.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.673 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:17:36.673 00:17:36.673 --- 10.0.0.1 ping statistics --- 00:17:36.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.673 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:36.673 22:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=3469978 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 3469978 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3469978 ']' 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:36.673 22:05:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.673 [2024-10-12 22:05:54.411282] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:36.673 [2024-10-12 22:05:54.411347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.673 [2024-10-12 22:05:54.500767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.673 [2024-10-12 22:05:54.547411] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.673 [2024-10-12 22:05:54.547460] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.673 [2024-10-12 22:05:54.547468] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.673 [2024-10-12 22:05:54.547475] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.673 [2024-10-12 22:05:54.547482] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:36.673 [2024-10-12 22:05:54.547504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.934 [2024-10-12 22:05:55.289270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.934 [2024-10-12 22:05:55.313500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.934 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.935 NULL1 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.935 22:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:36.935 [2024-10-12 22:05:55.382344] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:36.935 [2024-10-12 22:05:55.382396] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470028 ] 00:17:37.508 Attached to nqn.2016-06.io.spdk:cnode1 00:17:37.508 Namespace ID: 1 size: 1GB 00:17:37.508 fused_ordering(0) 00:17:37.508 fused_ordering(1) 00:17:37.508 fused_ordering(2) 00:17:37.508 fused_ordering(3) 00:17:37.508 fused_ordering(4) 00:17:37.508 fused_ordering(5) 00:17:37.508 fused_ordering(6) 00:17:37.508 fused_ordering(7) 00:17:37.508 fused_ordering(8) 00:17:37.508 fused_ordering(9) 00:17:37.508 fused_ordering(10) 00:17:37.508 fused_ordering(11) 00:17:37.508 fused_ordering(12) 00:17:37.508 fused_ordering(13) 00:17:37.508 fused_ordering(14) 00:17:37.508 fused_ordering(15) 00:17:37.508 fused_ordering(16) 00:17:37.508 fused_ordering(17) 00:17:37.508 fused_ordering(18) 00:17:37.508 fused_ordering(19) 00:17:37.508 fused_ordering(20) 00:17:37.508 fused_ordering(21) 00:17:37.508 fused_ordering(22) 00:17:37.508 fused_ordering(23) 00:17:37.508 fused_ordering(24) 00:17:37.508 fused_ordering(25) 00:17:37.508 fused_ordering(26) 00:17:37.508 fused_ordering(27) 00:17:37.508 
fused_ordering(28) 00:17:37.508 … fused_ordering(997) 00:17:39.177 [repetitive fused_ordering progress output, iterations 28–997, timestamps 00:17:37.508–00:17:39.177, condensed]
00:17:39.177 fused_ordering(998) … fused_ordering(1023) 00:17:39.177 [iterations 998–1023 condensed] 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:39.177 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 3469978 ']' 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 3469978 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3469978 ']' 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3469978 00:17:39.177 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3469978 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3469978' 00:17:39.438 killing process with pid 3469978 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3469978 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3469978 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == 
\t\c\p ]] 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.438 22:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.984 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:41.984 00:17:41.984 real 0m13.362s 00:17:41.984 user 0m6.882s 00:17:41.984 sys 0m7.174s 00:17:41.984 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:41.984 22:05:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:41.984 ************************************ 00:17:41.984 END TEST nvmf_fused_ordering 00:17:41.984 ************************************ 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:41.984 22:06:00 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:41.984 ************************************ 00:17:41.984 START TEST nvmf_ns_masking 00:17:41.984 ************************************ 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:41.984 * Looking for test storage... 00:17:41.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.984 22:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:41.984 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:41.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.985 --rc genhtml_branch_coverage=1 00:17:41.985 --rc genhtml_function_coverage=1 00:17:41.985 --rc genhtml_legend=1 00:17:41.985 --rc geninfo_all_blocks=1 00:17:41.985 --rc geninfo_unexecuted_blocks=1 00:17:41.985 00:17:41.985 ' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:41.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.985 --rc genhtml_branch_coverage=1 00:17:41.985 --rc genhtml_function_coverage=1 00:17:41.985 --rc genhtml_legend=1 00:17:41.985 --rc geninfo_all_blocks=1 00:17:41.985 --rc geninfo_unexecuted_blocks=1 00:17:41.985 00:17:41.985 ' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:41.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.985 --rc genhtml_branch_coverage=1 00:17:41.985 --rc genhtml_function_coverage=1 00:17:41.985 --rc genhtml_legend=1 00:17:41.985 --rc geninfo_all_blocks=1 00:17:41.985 --rc geninfo_unexecuted_blocks=1 00:17:41.985 00:17:41.985 ' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:41.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.985 --rc genhtml_branch_coverage=1 00:17:41.985 --rc 
genhtml_function_coverage=1 00:17:41.985 --rc genhtml_legend=1 00:17:41.985 --rc geninfo_all_blocks=1 00:17:41.985 --rc geninfo_unexecuted_blocks=1 00:17:41.985 00:17:41.985 ' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:41.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ed1328d3-ca65-47a7-8d37-f5e09deba2ec 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9ef45af6-88a4-4c4c-a65f-deae2fe8f7c9 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=793a8f33-a923-489b-ac64-7d3c841b9a26 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g 
is_hw=no 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:17:41.985 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:41.986 22:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:50.125 22:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.125 22:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:50.125 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:50.125 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:50.125 22:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:50.125 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:50.125 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:50.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:17:50.125 00:17:50.125 --- 10.0.0.2 ping statistics --- 00:17:50.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.125 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:17:50.125 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:50.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:17:50.125 00:17:50.126 --- 10.0.0.1 ping statistics --- 00:17:50.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.126 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=3474795 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 3474795 
00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3474795 ']' 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.126 22:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.126 [2024-10-12 22:06:07.949349] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:50.126 [2024-10-12 22:06:07.949417] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.126 [2024-10-12 22:06:08.039863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.126 [2024-10-12 22:06:08.085830] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.126 [2024-10-12 22:06:08.085883] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:50.126 [2024-10-12 22:06:08.085891] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.126 [2024-10-12 22:06:08.085899] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.126 [2024-10-12 22:06:08.085905] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.126 [2024-10-12 22:06:08.085937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.387 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:50.387 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:50.387 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:50.387 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:50.387 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:50.387 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.387 22:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:50.648 [2024-10-12 22:06:08.988985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.648 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:50.648 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:50.648 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
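The trace above creates the TCP transport and a `Malloc1` bdev via `bdev_malloc_create 64 512`. As a hedged aside (assuming, as in SPDK's RPC convention, that the first argument is the size in MiB and the second the block size in bytes), the resulting block count can be sanity-checked with plain shell arithmetic:

```shell
# Sketch only: derive the block count implied by "bdev_malloc_create 64 512",
# assuming size is given in MiB and block size in bytes.
size_mib=64
block_size=512
num_blocks=$(( size_mib * 1024 * 1024 / block_size ))
echo "$num_blocks"   # prints 131072
```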
00:17:50.908 Malloc1 00:17:50.908 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:51.169 Malloc2 00:17:51.169 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:51.430 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:51.430 22:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.691 [2024-10-12 22:06:10.031299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.691 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:51.691 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 793a8f33-a923-489b-ac64-7d3c841b9a26 -a 10.0.0.2 -s 4420 -i 4 00:17:51.951 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:51.951 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:51.951 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:51.951 22:06:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:51.951 22:06:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:53.864 [ 0]:0x1 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:53.864 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.125 
22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ea618fa901448aa94e49d6f957f0a16 00:17:54.125 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ea618fa901448aa94e49d6f957f0a16 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.125 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:54.125 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:54.125 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.125 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.125 [ 0]:0x1 00:17:54.125 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.125 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ea618fa901448aa94e49d6f957f0a16 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ea618fa901448aa94e49d6f957f0a16 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.385 [ 1]:0x2 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.385 22:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98b6bef51341442d8811beff0c3d0ca8 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98b6bef51341442d8811beff0c3d0ca8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.385 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.645 22:06:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:54.906 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:54.906 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 793a8f33-a923-489b-ac64-7d3c841b9a26 -a 10.0.0.2 -s 4420 -i 4 00:17:54.906 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:54.906 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:54.906 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:54.906 22:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:54.906 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:54.906 22:06:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:57.447 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
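The `ns_is_visible` checks running through this trace boil down to one rule: `nvme id-ns ... -o json | jq -r .nguid` returns the namespace NGUID, and the namespace counts as masked (hidden) when that NGUID reads back as all zeros. A minimal standalone recap of that comparison, with the device query stubbed out and the sample NGUIDs copied from the log:

```shell
# Hedged sketch of the visibility test used by ns_masking.sh: a namespace is
# treated as visible iff its NGUID is non-zero. The real script obtains the
# NGUID via "nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid"; here we
# pass NGUID strings directly so the sketch is self-contained.
zero_nguid=00000000000000000000000000000000
ns_is_visible() {
  [[ $1 != "$zero_nguid" ]]
}

ns_is_visible 2ea618fa901448aa94e49d6f957f0a16 && echo visible || echo hidden  # prints "visible"
ns_is_visible "$zero_nguid" && echo visible || echo hidden                     # prints "hidden"
```

This is why the trace's masked-namespace branches compare the queried NGUID against the long `\0\0...` pattern: a hidden namespace still answers `id-ns`, but with a zeroed NGUID.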
00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:57.448 [ 0]:0x2 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98b6bef51341442d8811beff0c3d0ca8 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98b6bef51341442d8811beff0c3d0ca8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.448 [ 0]:0x1 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ea618fa901448aa94e49d6f957f0a16 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ea618fa901448aa94e49d6f957f0a16 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:57.448 [ 1]:0x2 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98b6bef51341442d8811beff0c3d0ca8 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98b6bef51341442d8811beff0c3d0ca8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.448 22:06:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.708 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:57.709 [ 0]:0x2 00:17:57.709 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:57.709 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.709 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98b6bef51341442d8811beff0c3d0ca8 00:17:57.709 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98b6bef51341442d8811beff0c3d0ca8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.709 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:57.709 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:57.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.969 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:57.969 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:57.969 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 793a8f33-a923-489b-ac64-7d3c841b9a26 -a 10.0.0.2 -s 4420 -i 4 00:17:58.230 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:58.230 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:58.230 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.230 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:58.230 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:58.230 22:06:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:00.140 [ 0]:0x1 00:18:00.140 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:00.140 22:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:00.401 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ea618fa901448aa94e49d6f957f0a16 00:18:00.401 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ea618fa901448aa94e49d6f957f0a16 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:00.401 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:00.401 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:00.401 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:00.401 [ 1]:0x2 00:18:00.401 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:00.401 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:00.401 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98b6bef51341442d8811beff0c3d0ca8 00:18:00.401 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98b6bef51341442d8811beff0c3d0ca8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:00.401 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:00.662 
22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:00.662 [ 0]:0x2 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98b6bef51341442d8811beff0c3d0ca8 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98b6bef51341442d8811beff0c3d0ca8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:00.662 22:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.662 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:00.662 22:06:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.662 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:00.662 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:00.662 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:00.923 [2024-10-12 22:06:19.156160] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:00.923 request: 00:18:00.923 { 00:18:00.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.923 "nsid": 2, 00:18:00.923 "host": "nqn.2016-06.io.spdk:host1", 00:18:00.923 "method": "nvmf_ns_remove_host", 00:18:00.923 "req_id": 1 00:18:00.923 } 00:18:00.923 Got JSON-RPC error response 00:18:00.923 response: 00:18:00.923 { 00:18:00.923 "code": -32602, 00:18:00.923 "message": "Invalid parameters" 00:18:00.923 } 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:00.923 22:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:00.923 [ 0]:0x2 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=98b6bef51341442d8811beff0c3d0ca8 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 98b6bef51341442d8811beff0c3d0ca8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:00.923 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:01.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.183 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3477742 00:18:01.183 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.183 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:01.183 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3477742 /var/tmp/host.sock 00:18:01.183 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3477742 ']' 00:18:01.183 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:01.183 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.183 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:01.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:01.183 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.183 22:06:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:01.183 [2024-10-12 22:06:19.534232] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:01.183 [2024-10-12 22:06:19.534285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477742 ] 00:18:01.183 [2024-10-12 22:06:19.613496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.183 [2024-10-12 22:06:19.644620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.126 22:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.126 22:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:02.126 22:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.126 22:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:02.387 22:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ed1328d3-ca65-47a7-8d37-f5e09deba2ec 00:18:02.387 22:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:02.387 22:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ED1328D3CA6547A78D37F5E09DEBA2EC -i 00:18:02.387 22:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9ef45af6-88a4-4c4c-a65f-deae2fe8f7c9 00:18:02.387 22:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:18:02.387 22:06:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9EF45AF688A44C4CA65FDEAE2FE8F7C9 -i 00:18:02.647 22:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:02.908 22:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:03.169 22:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:03.169 22:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:03.430 nvme0n1 00:18:03.430 22:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:03.430 22:06:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:03.691 nvme1n2 00:18:03.691 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:03.691 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:03.691 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:03.691 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:03.691 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:03.953 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:03.953 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:03.953 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:03.953 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ed1328d3-ca65-47a7-8d37-f5e09deba2ec == \e\d\1\3\2\8\d\3\-\c\a\6\5\-\4\7\a\7\-\8\d\3\7\-\f\5\e\0\9\d\e\b\a\2\e\c ]] 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 9ef45af6-88a4-4c4c-a65f-deae2fe8f7c9 == \9\e\f\4\5\a\f\6\-\8\8\a\4\-\4\c\4\c\-\a\6\5\f\-\d\e\a\e\2\f\e\8\f\7\c\9 ]] 00:18:04.213 22:06:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3477742 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3477742 ']' 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3477742 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3477742 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3477742' 00:18:04.213 killing process with pid 3477742 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3477742 00:18:04.213 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3477742 00:18:04.474 22:06:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@121 -- # sync 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:04.734 rmmod nvme_tcp 00:18:04.734 rmmod nvme_fabrics 00:18:04.734 rmmod nvme_keyring 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 3474795 ']' 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 3474795 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3474795 ']' 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3474795 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.734 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3474795 00:18:04.995 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:04.995 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:04.995 22:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3474795' 00:18:04.995 killing process with pid 3474795 00:18:04.995 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3474795 00:18:04.995 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3474795 00:18:04.995 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:04.995 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:04.996 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:04.996 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:04.996 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:18:04.996 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:04.996 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:18:04.996 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:04.996 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:04.996 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.996 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.996 22:06:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:07.542 00:18:07.542 real 0m25.389s 00:18:07.542 user 0m25.808s 00:18:07.542 sys 
0m7.957s 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:07.542 ************************************ 00:18:07.542 END TEST nvmf_ns_masking 00:18:07.542 ************************************ 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:07.542 ************************************ 00:18:07.542 START TEST nvmf_nvme_cli 00:18:07.542 ************************************ 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:07.542 * Looking for test storage... 
00:18:07.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:07.542 22:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.542 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:07.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.542 --rc 
genhtml_branch_coverage=1 00:18:07.543 --rc genhtml_function_coverage=1 00:18:07.543 --rc genhtml_legend=1 00:18:07.543 --rc geninfo_all_blocks=1 00:18:07.543 --rc geninfo_unexecuted_blocks=1 00:18:07.543 00:18:07.543 ' 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:07.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.543 --rc genhtml_branch_coverage=1 00:18:07.543 --rc genhtml_function_coverage=1 00:18:07.543 --rc genhtml_legend=1 00:18:07.543 --rc geninfo_all_blocks=1 00:18:07.543 --rc geninfo_unexecuted_blocks=1 00:18:07.543 00:18:07.543 ' 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:07.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.543 --rc genhtml_branch_coverage=1 00:18:07.543 --rc genhtml_function_coverage=1 00:18:07.543 --rc genhtml_legend=1 00:18:07.543 --rc geninfo_all_blocks=1 00:18:07.543 --rc geninfo_unexecuted_blocks=1 00:18:07.543 00:18:07.543 ' 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:07.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.543 --rc genhtml_branch_coverage=1 00:18:07.543 --rc genhtml_function_coverage=1 00:18:07.543 --rc genhtml_legend=1 00:18:07.543 --rc geninfo_all_blocks=1 00:18:07.543 --rc geninfo_unexecuted_blocks=1 00:18:07.543 00:18:07.543 ' 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.543 22:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.543 22:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.543 22:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:07.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:07.543 22:06:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:15.688 22:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 
00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:15.688 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:15.688 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:15.688 22:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:15.688 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.688 22:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:15.688 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.688 22:06:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:15.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:18:15.689 00:18:15.689 --- 10.0.0.2 ping statistics --- 00:18:15.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.689 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:18:15.689 00:18:15.689 --- 10.0.0.1 ping statistics --- 00:18:15.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.689 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:15.689 22:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=3482770 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 3482770 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3482770 ']' 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:15.689 22:06:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.689 [2024-10-12 22:06:33.377571] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:15.689 [2024-10-12 22:06:33.377639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.689 [2024-10-12 22:06:33.455296] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.689 [2024-10-12 22:06:33.503997] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.689 [2024-10-12 22:06:33.504053] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.689 [2024-10-12 22:06:33.504061] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.689 [2024-10-12 22:06:33.504068] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.689 [2024-10-12 22:06:33.504074] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.689 [2024-10-12 22:06:33.504233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.689 [2024-10-12 22:06:33.504485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.689 [2024-10-12 22:06:33.504645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.689 [2024-10-12 22:06:33.504647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.951 [2024-10-12 22:06:34.241203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.951 Malloc0 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.951 Malloc1 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.951 [2024-10-12 22:06:34.343283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.951 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:18:16.213 00:18:16.213 Discovery Log Number of Records 2, Generation counter 2 00:18:16.213 =====Discovery Log Entry 0====== 00:18:16.213 trtype: tcp 00:18:16.213 adrfam: ipv4 00:18:16.213 subtype: current discovery subsystem 00:18:16.213 treq: not required 00:18:16.213 portid: 0 00:18:16.213 trsvcid: 4420 
00:18:16.213 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:16.213 traddr: 10.0.0.2 00:18:16.213 eflags: explicit discovery connections, duplicate discovery information 00:18:16.213 sectype: none 00:18:16.213 =====Discovery Log Entry 1====== 00:18:16.213 trtype: tcp 00:18:16.213 adrfam: ipv4 00:18:16.213 subtype: nvme subsystem 00:18:16.213 treq: not required 00:18:16.213 portid: 0 00:18:16.213 trsvcid: 4420 00:18:16.213 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:16.213 traddr: 10.0.0.2 00:18:16.213 eflags: none 00:18:16.213 sectype: none 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:16.213 22:06:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:17.715 22:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:17.715 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:18:17.715 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.715 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:17.715 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:17.715 22:06:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:18:19.629 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:19.890 
22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:19.890 /dev/nvme0n2 ]] 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:19.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:19.890 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:19.890 rmmod nvme_tcp 00:18:19.890 rmmod nvme_fabrics 00:18:19.890 rmmod nvme_keyring 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 3482770 ']' 
00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 3482770 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3482770 ']' 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3482770 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3482770 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3482770' 00:18:20.151 killing process with pid 3482770 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3482770 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3482770 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # 
iptables-restore 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.151 22:06:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.698 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:22.698 00:18:22.698 real 0m15.139s 00:18:22.698 user 0m22.558s 00:18:22.698 sys 0m6.359s 00:18:22.698 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:22.698 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:22.698 ************************************ 00:18:22.698 END TEST nvmf_nvme_cli 00:18:22.698 ************************************ 00:18:22.698 22:06:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:22.699 ************************************ 
00:18:22.699 START TEST nvmf_vfio_user 00:18:22.699 ************************************ 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:22.699 * Looking for test storage... 00:18:22.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.699 
22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:22.699 22:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:22.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.699 --rc genhtml_branch_coverage=1 00:18:22.699 --rc genhtml_function_coverage=1 00:18:22.699 --rc genhtml_legend=1 00:18:22.699 --rc geninfo_all_blocks=1 00:18:22.699 --rc geninfo_unexecuted_blocks=1 00:18:22.699 00:18:22.699 ' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:22.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.699 --rc genhtml_branch_coverage=1 00:18:22.699 --rc genhtml_function_coverage=1 00:18:22.699 --rc genhtml_legend=1 00:18:22.699 --rc geninfo_all_blocks=1 00:18:22.699 --rc geninfo_unexecuted_blocks=1 00:18:22.699 00:18:22.699 ' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:22.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.699 --rc genhtml_branch_coverage=1 00:18:22.699 --rc genhtml_function_coverage=1 00:18:22.699 --rc genhtml_legend=1 00:18:22.699 --rc geninfo_all_blocks=1 00:18:22.699 --rc geninfo_unexecuted_blocks=1 00:18:22.699 00:18:22.699 ' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:22.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.699 --rc genhtml_branch_coverage=1 00:18:22.699 --rc genhtml_function_coverage=1 00:18:22.699 --rc genhtml_legend=1 00:18:22.699 --rc geninfo_all_blocks=1 00:18:22.699 --rc geninfo_unexecuted_blocks=1 00:18:22.699 00:18:22.699 ' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.699 
22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.699 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:22.700 22:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3484260 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3484260' 00:18:22.700 Process pid: 3484260 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3484260 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3484260 ']' 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.700 22:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.700 22:06:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:22.700 [2024-10-12 22:06:41.039408] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:22.700 [2024-10-12 22:06:41.039472] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.700 [2024-10-12 22:06:41.120148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:22.700 [2024-10-12 22:06:41.154993] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.700 [2024-10-12 22:06:41.155032] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.700 [2024-10-12 22:06:41.155037] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.700 [2024-10-12 22:06:41.155043] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:22.700 [2024-10-12 22:06:41.155047] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.700 [2024-10-12 22:06:41.155142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.700 [2024-10-12 22:06:41.155236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.700 [2024-10-12 22:06:41.155363] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.700 [2024-10-12 22:06:41.155364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:23.657 22:06:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.657 22:06:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:23.657 22:06:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:24.598 22:06:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:24.598 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:24.598 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:24.598 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:24.598 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:24.598 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:24.858 Malloc1 00:18:24.858 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
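The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a bounded poll loop. A simplified sketch of that idea (an illustrative reimplementation, not the real `waitforlisten` from `autotest_common.sh`, which also probes the RPC socket with `rpc.py`) could be:

```shell
# Sketch: poll until the target process has created its UNIX-domain RPC
# socket, bailing out if the process dies or retries are exhausted.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for (( i = max_retries; i > 0; i-- )); do
        kill -0 "$pid" 2>/dev/null || return 1    # target process died
        [ -S "$rpc_addr" ] && return 0            # socket exists: listener is up
        sleep 0.1
    done
    return 1                                      # retries exhausted
}
```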
nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:25.119 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:25.379 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:25.379 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:25.379 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:25.379 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:25.640 Malloc2 00:18:25.640 22:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:25.901 22:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:25.901 22:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:26.164 22:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:26.164 22:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
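The per-device setup the test performs above (malloc bdev, subsystem, vfio-user listener, twice) can be condensed into one loop. This is a sketch, not the literal `setup_nvmf_vfio_user` function: the `run` dry-run helper is invented for illustration, and the short `scripts/rpc.py` path stands in for the full workspace path seen in the log.

```shell
# Sketch of the vfio-user setup sequence from this log. With DRY_RUN=1
# (the default here) the rpc.py invocations are only printed, not executed.
rpc_py=${rpc_py:-scripts/rpc.py}    # assumed path; the log uses the full Jenkins workspace path
NUM_DEVICES=${NUM_DEVICES:-2}
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

run "$rpc_py" nvmf_create_transport -t VFIOUSER
for i in $(seq 1 "$NUM_DEVICES"); do
    run mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    run "$rpc_py" bdev_malloc_create 64 512 -b "Malloc$i"
    run "$rpc_py" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    run "$rpc_py" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    run "$rpc_py" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done
```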
target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:26.164 22:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:26.164 22:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:26.164 22:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:26.164 22:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:26.164 [2024-10-12 22:06:44.565499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:26.164 [2024-10-12 22:06:44.565542] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484957 ] 00:18:26.164 [2024-10-12 22:06:44.592133] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:26.164 [2024-10-12 22:06:44.601374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:26.164 [2024-10-12 22:06:44.601389] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb312ec5000 00:18:26.164 [2024-10-12 22:06:44.602377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.164 [2024-10-12 22:06:44.603377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: 
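The `spdk_nvme_identify` run above addresses the controller through an SPDK transport-ID string passed with `-r`. A small hypothetical helper (the function name is invented; the `trtype:`/`traddr:`/`subnqn:` fields match what this log passes) that builds that string:

```shell
# Builds the -r transport-ID argument used by spdk_nvme_identify in this
# log for a vfio-user target: transport type, socket directory, subsystem NQN.
vfio_user_trid() {
    printf 'trtype:VFIOUSER traddr:%s subnqn:%s' "$1" "$2"
}
```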
Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.164 [2024-10-12 22:06:44.604372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.164 [2024-10-12 22:06:44.605386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:26.164 [2024-10-12 22:06:44.606381] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:26.164 [2024-10-12 22:06:44.607390] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.164 [2024-10-12 22:06:44.608401] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:26.164 [2024-10-12 22:06:44.609403] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.164 [2024-10-12 22:06:44.610418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:26.164 [2024-10-12 22:06:44.610424] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb311bcf000 00:18:26.164 [2024-10-12 22:06:44.611336] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:26.164 [2024-10-12 22:06:44.620782] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:26.164 [2024-10-12 22:06:44.620806] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:26.164 [2024-10-12 22:06:44.625502] nvme_vfio_user.c: 
103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:26.164 [2024-10-12 22:06:44.625534] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:26.164 [2024-10-12 22:06:44.625598] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:26.164 [2024-10-12 22:06:44.625613] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:26.164 [2024-10-12 22:06:44.625617] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:18:26.164 [2024-10-12 22:06:44.626499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:26.164 [2024-10-12 22:06:44.626506] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:26.164 [2024-10-12 22:06:44.626511] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:26.164 [2024-10-12 22:06:44.627502] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:26.164 [2024-10-12 22:06:44.627508] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:26.164 [2024-10-12 22:06:44.627513] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:26.164 [2024-10-12 22:06:44.628508] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:26.164 [2024-10-12 22:06:44.628514] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:26.164 [2024-10-12 22:06:44.629523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:26.164 [2024-10-12 22:06:44.629531] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:26.164 [2024-10-12 22:06:44.629535] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:26.164 [2024-10-12 22:06:44.629540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:26.164 [2024-10-12 22:06:44.629643] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:26.164 [2024-10-12 22:06:44.629647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:26.164 [2024-10-12 22:06:44.629651] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:26.164 [2024-10-12 22:06:44.630526] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:26.164 [2024-10-12 22:06:44.631530] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:26.164 [2024-10-12 22:06:44.632531] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:26.164 [2024-10-12 22:06:44.633535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.164 [2024-10-12 22:06:44.633588] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:26.164 [2024-10-12 22:06:44.634547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:26.164 [2024-10-12 22:06:44.634552] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:26.164 [2024-10-12 22:06:44.634556] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:26.164 [2024-10-12 22:06:44.634571] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:26.164 [2024-10-12 22:06:44.634576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:26.164 [2024-10-12 22:06:44.634588] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:26.164 [2024-10-12 22:06:44.634592] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.164 [2024-10-12 22:06:44.634595] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.164 [2024-10-12 22:06:44.634606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.164 [2024-10-12 
22:06:44.634645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:26.164 [2024-10-12 22:06:44.634652] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:26.164 [2024-10-12 22:06:44.634656] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:26.164 [2024-10-12 22:06:44.634659] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:26.164 [2024-10-12 22:06:44.634662] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:26.164 [2024-10-12 22:06:44.634667] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:26.164 [2024-10-12 22:06:44.634671] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:26.164 [2024-10-12 22:06:44.634674] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:26.164 [2024-10-12 22:06:44.634680] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:26.164 [2024-10-12 22:06:44.634687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:26.164 [2024-10-12 22:06:44.634700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:26.164 [2024-10-12 22:06:44.634708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 
cdw11:00000000 00:18:26.164 [2024-10-12 22:06:44.634715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.164 [2024-10-12 22:06:44.634721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.164 [2024-10-12 22:06:44.634726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.164 [2024-10-12 22:06:44.634730] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:26.164 [2024-10-12 22:06:44.634736] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.634751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.634755] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:26.165 [2024-10-12 22:06:44.634759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634763] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634770] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] 
setting state to wait for set number of queues (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.634786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.634829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634834] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634840] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:26.165 [2024-10-12 22:06:44.634843] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:26.165 [2024-10-12 22:06:44.634845] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.165 [2024-10-12 22:06:44.634849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.634862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.634868] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:26.165 [2024-10-12 22:06:44.634877] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634883] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634887] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:26.165 [2024-10-12 22:06:44.634890] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.165 [2024-10-12 22:06:44.634893] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.165 [2024-10-12 22:06:44.634897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.634914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.634924] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634929] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634934] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:26.165 [2024-10-12 22:06:44.634937] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.165 [2024-10-12 22:06:44.634939] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.165 [2024-10-12 22:06:44.634944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.634953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.634959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634964] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634969] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634974] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634977] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634982] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634986] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:26.165 [2024-10-12 22:06:44.634989] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:26.165 [2024-10-12 22:06:44.634993] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:26.165 [2024-10-12 22:06:44.635007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.635015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.635024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.635030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.635038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.635047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.635054] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.635061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.635071] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:26.165 [2024-10-12 22:06:44.635074] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:26.165 [2024-10-12 22:06:44.635077] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:26.165 [2024-10-12 22:06:44.635079] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:26.165 [2024-10-12 22:06:44.635082] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:26.165 [2024-10-12 22:06:44.635086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:26.165 
[2024-10-12 22:06:44.635091] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:26.165 [2024-10-12 22:06:44.635094] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:26.165 [2024-10-12 22:06:44.635097] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.165 [2024-10-12 22:06:44.635101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.635109] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:26.165 [2024-10-12 22:06:44.635112] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.165 [2024-10-12 22:06:44.635114] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.165 [2024-10-12 22:06:44.635119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.635124] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:26.165 [2024-10-12 22:06:44.635127] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:26.165 [2024-10-12 22:06:44.635129] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.165 [2024-10-12 22:06:44.635134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:26.165 [2024-10-12 22:06:44.635139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 
00:18:26.165 [2024-10-12 22:06:44.635147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.635156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:26.165 [2024-10-12 22:06:44.635161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:26.165 ===================================================== 00:18:26.165 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:26.165 ===================================================== 00:18:26.165 Controller Capabilities/Features 00:18:26.165 ================================ 00:18:26.165 Vendor ID: 4e58 00:18:26.165 Subsystem Vendor ID: 4e58 00:18:26.165 Serial Number: SPDK1 00:18:26.165 Model Number: SPDK bdev Controller 00:18:26.165 Firmware Version: 24.09.1 00:18:26.165 Recommended Arb Burst: 6 00:18:26.165 IEEE OUI Identifier: 8d 6b 50 00:18:26.165 Multi-path I/O 00:18:26.165 May have multiple subsystem ports: Yes 00:18:26.165 May have multiple controllers: Yes 00:18:26.165 Associated with SR-IOV VF: No 00:18:26.165 Max Data Transfer Size: 131072 00:18:26.165 Max Number of Namespaces: 32 00:18:26.165 Max Number of I/O Queues: 127 00:18:26.165 NVMe Specification Version (VS): 1.3 00:18:26.165 NVMe Specification Version (Identify): 1.3 00:18:26.165 Maximum Queue Entries: 256 00:18:26.165 Contiguous Queues Required: Yes 00:18:26.165 Arbitration Mechanisms Supported 00:18:26.165 Weighted Round Robin: Not Supported 00:18:26.165 Vendor Specific: Not Supported 00:18:26.165 Reset Timeout: 15000 ms 00:18:26.165 Doorbell Stride: 4 bytes 00:18:26.165 NVM Subsystem Reset: Not Supported 00:18:26.165 Command Sets Supported 00:18:26.165 NVM Command Set: Supported 00:18:26.165 Boot Partition: Not Supported 00:18:26.165 Memory Page Size Minimum: 
4096 bytes 00:18:26.165 Memory Page Size Maximum: 4096 bytes 00:18:26.165 Persistent Memory Region: Not Supported 00:18:26.165 Optional Asynchronous Events Supported 00:18:26.165 Namespace Attribute Notices: Supported 00:18:26.165 Firmware Activation Notices: Not Supported 00:18:26.165 ANA Change Notices: Not Supported 00:18:26.165 PLE Aggregate Log Change Notices: Not Supported 00:18:26.165 LBA Status Info Alert Notices: Not Supported 00:18:26.165 EGE Aggregate Log Change Notices: Not Supported 00:18:26.165 Normal NVM Subsystem Shutdown event: Not Supported 00:18:26.166 Zone Descriptor Change Notices: Not Supported 00:18:26.166 Discovery Log Change Notices: Not Supported 00:18:26.166 Controller Attributes 00:18:26.166 128-bit Host Identifier: Supported 00:18:26.166 Non-Operational Permissive Mode: Not Supported 00:18:26.166 NVM Sets: Not Supported 00:18:26.166 Read Recovery Levels: Not Supported 00:18:26.166 Endurance Groups: Not Supported 00:18:26.166 Predictable Latency Mode: Not Supported 00:18:26.166 Traffic Based Keep ALive: Not Supported 00:18:26.166 Namespace Granularity: Not Supported 00:18:26.166 SQ Associations: Not Supported 00:18:26.166 UUID List: Not Supported 00:18:26.166 Multi-Domain Subsystem: Not Supported 00:18:26.166 Fixed Capacity Management: Not Supported 00:18:26.166 Variable Capacity Management: Not Supported 00:18:26.166 Delete Endurance Group: Not Supported 00:18:26.166 Delete NVM Set: Not Supported 00:18:26.166 Extended LBA Formats Supported: Not Supported 00:18:26.166 Flexible Data Placement Supported: Not Supported 00:18:26.166 00:18:26.166 Controller Memory Buffer Support 00:18:26.166 ================================ 00:18:26.166 Supported: No 00:18:26.166 00:18:26.166 Persistent Memory Region Support 00:18:26.166 ================================ 00:18:26.166 Supported: No 00:18:26.166 00:18:26.166 Admin Command Set Attributes 00:18:26.166 ============================ 00:18:26.166 Security Send/Receive: Not Supported 00:18:26.166 
Format NVM: Not Supported 00:18:26.166 Firmware Activate/Download: Not Supported 00:18:26.166 Namespace Management: Not Supported 00:18:26.166 Device Self-Test: Not Supported 00:18:26.166 Directives: Not Supported 00:18:26.166 NVMe-MI: Not Supported 00:18:26.166 Virtualization Management: Not Supported 00:18:26.166 Doorbell Buffer Config: Not Supported 00:18:26.166 Get LBA Status Capability: Not Supported 00:18:26.166 Command & Feature Lockdown Capability: Not Supported 00:18:26.166 Abort Command Limit: 4 00:18:26.166 Async Event Request Limit: 4 00:18:26.166 Number of Firmware Slots: N/A 00:18:26.166 Firmware Slot 1 Read-Only: N/A 00:18:26.166 Firmware Activation Without Reset: N/A 00:18:26.166 Multiple Update Detection Support: N/A 00:18:26.166 Firmware Update Granularity: No Information Provided 00:18:26.166 Per-Namespace SMART Log: No 00:18:26.166 Asymmetric Namespace Access Log Page: Not Supported 00:18:26.166 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:26.166 Command Effects Log Page: Supported 00:18:26.166 Get Log Page Extended Data: Supported 00:18:26.166 Telemetry Log Pages: Not Supported 00:18:26.166 Persistent Event Log Pages: Not Supported 00:18:26.166 Supported Log Pages Log Page: May Support 00:18:26.166 Commands Supported & Effects Log Page: Not Supported 00:18:26.166 Feature Identifiers & Effects Log Page:May Support 00:18:26.166 NVMe-MI Commands & Effects Log Page: May Support 00:18:26.166 Data Area 4 for Telemetry Log: Not Supported 00:18:26.166 Error Log Page Entries Supported: 128 00:18:26.166 Keep Alive: Supported 00:18:26.166 Keep Alive Granularity: 10000 ms 00:18:26.166 00:18:26.166 NVM Command Set Attributes 00:18:26.166 ========================== 00:18:26.166 Submission Queue Entry Size 00:18:26.166 Max: 64 00:18:26.166 Min: 64 00:18:26.166 Completion Queue Entry Size 00:18:26.166 Max: 16 00:18:26.166 Min: 16 00:18:26.166 Number of Namespaces: 32 00:18:26.166 Compare Command: Supported 00:18:26.166 Write Uncorrectable Command: Not 
Supported 00:18:26.166 Dataset Management Command: Supported 00:18:26.166 Write Zeroes Command: Supported 00:18:26.166 Set Features Save Field: Not Supported 00:18:26.166 Reservations: Not Supported 00:18:26.166 Timestamp: Not Supported 00:18:26.166 Copy: Supported 00:18:26.166 Volatile Write Cache: Present 00:18:26.166 Atomic Write Unit (Normal): 1 00:18:26.166 Atomic Write Unit (PFail): 1 00:18:26.166 Atomic Compare & Write Unit: 1 00:18:26.166 Fused Compare & Write: Supported 00:18:26.166 Scatter-Gather List 00:18:26.166 SGL Command Set: Supported (Dword aligned) 00:18:26.166 SGL Keyed: Not Supported 00:18:26.166 SGL Bit Bucket Descriptor: Not Supported 00:18:26.166 SGL Metadata Pointer: Not Supported 00:18:26.166 Oversized SGL: Not Supported 00:18:26.166 SGL Metadata Address: Not Supported 00:18:26.166 SGL Offset: Not Supported 00:18:26.166 Transport SGL Data Block: Not Supported 00:18:26.166 Replay Protected Memory Block: Not Supported 00:18:26.166 00:18:26.166 Firmware Slot Information 00:18:26.166 ========================= 00:18:26.166 Active slot: 1 00:18:26.166 Slot 1 Firmware Revision: 24.09.1 00:18:26.166 00:18:26.166 00:18:26.166 Commands Supported and Effects 00:18:26.166 ============================== 00:18:26.166 Admin Commands 00:18:26.166 -------------- 00:18:26.166 Get Log Page (02h): Supported 00:18:26.166 Identify (06h): Supported 00:18:26.166 Abort (08h): Supported 00:18:26.166 Set Features (09h): Supported 00:18:26.166 Get Features (0Ah): Supported 00:18:26.166 Asynchronous Event Request (0Ch): Supported 00:18:26.166 Keep Alive (18h): Supported 00:18:26.166 I/O Commands 00:18:26.166 ------------ 00:18:26.166 Flush (00h): Supported LBA-Change 00:18:26.166 Write (01h): Supported LBA-Change 00:18:26.166 Read (02h): Supported 00:18:26.166 Compare (05h): Supported 00:18:26.166 Write Zeroes (08h): Supported LBA-Change 00:18:26.166 Dataset Management (09h): Supported LBA-Change 00:18:26.166 Copy (19h): Supported LBA-Change 00:18:26.166 00:18:26.166 
Error Log 00:18:26.166 ========= 00:18:26.166 00:18:26.166 Arbitration 00:18:26.166 =========== 00:18:26.166 Arbitration Burst: 1 00:18:26.166 00:18:26.166 Power Management 00:18:26.166 ================ 00:18:26.166 Number of Power States: 1 00:18:26.166 Current Power State: Power State #0 00:18:26.166 Power State #0: 00:18:26.166 Max Power: 0.00 W 00:18:26.166 Non-Operational State: Operational 00:18:26.166 Entry Latency: Not Reported 00:18:26.166 Exit Latency: Not Reported 00:18:26.166 Relative Read Throughput: 0 00:18:26.166 Relative Read Latency: 0 00:18:26.166 Relative Write Throughput: 0 00:18:26.166 Relative Write Latency: 0 00:18:26.166 Idle Power: Not Reported 00:18:26.166 Active Power: Not Reported 00:18:26.166 Non-Operational Permissive Mode: Not Supported 00:18:26.166 00:18:26.166 Health Information 00:18:26.166 ================== 00:18:26.166 Critical Warnings: 00:18:26.166 Available Spare Space: OK 00:18:26.166 Temperature: OK 00:18:26.166 Device Reliability: OK 00:18:26.166 Read Only: No 00:18:26.166 Volatile Memory Backup: OK 00:18:26.166 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:26.166 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:26.166 Available Spare: 0% 00:18:26.166 Availabl[2024-10-12 22:06:44.635237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:26.166 [2024-10-12 22:06:44.635248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:26.166 [2024-10-12 22:06:44.635268] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:26.166 [2024-10-12 22:06:44.635275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.166 [2024-10-12 22:06:44.635280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.166 [2024-10-12 22:06:44.635284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.166 [2024-10-12 22:06:44.635289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.166 [2024-10-12 22:06:44.639108] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:26.166 [2024-10-12 22:06:44.639116] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:26.166 [2024-10-12 22:06:44.639571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.166 [2024-10-12 22:06:44.639608] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:26.166 [2024-10-12 22:06:44.639613] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:26.166 [2024-10-12 22:06:44.640576] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:26.166 [2024-10-12 22:06:44.640583] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:26.166 [2024-10-12 22:06:44.640634] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:26.166 [2024-10-12 22:06:44.641594] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:26.427 e Spare Threshold: 0% 00:18:26.427 Life Percentage Used: 0% 00:18:26.427 Data Units 
Read: 0 00:18:26.427 Data Units Written: 0 00:18:26.427 Host Read Commands: 0 00:18:26.427 Host Write Commands: 0 00:18:26.427 Controller Busy Time: 0 minutes 00:18:26.427 Power Cycles: 0 00:18:26.427 Power On Hours: 0 hours 00:18:26.427 Unsafe Shutdowns: 0 00:18:26.427 Unrecoverable Media Errors: 0 00:18:26.427 Lifetime Error Log Entries: 0 00:18:26.427 Warning Temperature Time: 0 minutes 00:18:26.427 Critical Temperature Time: 0 minutes 00:18:26.428 00:18:26.428 Number of Queues 00:18:26.428 ================ 00:18:26.428 Number of I/O Submission Queues: 127 00:18:26.428 Number of I/O Completion Queues: 127 00:18:26.428 00:18:26.428 Active Namespaces 00:18:26.428 ================= 00:18:26.428 Namespace ID:1 00:18:26.428 Error Recovery Timeout: Unlimited 00:18:26.428 Command Set Identifier: NVM (00h) 00:18:26.428 Deallocate: Supported 00:18:26.428 Deallocated/Unwritten Error: Not Supported 00:18:26.428 Deallocated Read Value: Unknown 00:18:26.428 Deallocate in Write Zeroes: Not Supported 00:18:26.428 Deallocated Guard Field: 0xFFFF 00:18:26.428 Flush: Supported 00:18:26.428 Reservation: Supported 00:18:26.428 Namespace Sharing Capabilities: Multiple Controllers 00:18:26.428 Size (in LBAs): 131072 (0GiB) 00:18:26.428 Capacity (in LBAs): 131072 (0GiB) 00:18:26.428 Utilization (in LBAs): 131072 (0GiB) 00:18:26.428 NGUID: 3B3D4455E7F045EB8E36245DF376C152 00:18:26.428 UUID: 3b3d4455-e7f0-45eb-8e36-245df376c152 00:18:26.428 Thin Provisioning: Not Supported 00:18:26.428 Per-NS Atomic Units: Yes 00:18:26.428 Atomic Boundary Size (Normal): 0 00:18:26.428 Atomic Boundary Size (PFail): 0 00:18:26.428 Atomic Boundary Offset: 0 00:18:26.428 Maximum Single Source Range Length: 65535 00:18:26.428 Maximum Copy Length: 65535 00:18:26.428 Maximum Source Range Count: 1 00:18:26.428 NGUID/EUI64 Never Reused: No 00:18:26.428 Namespace Write Protected: No 00:18:26.428 Number of LBA Formats: 1 00:18:26.428 Current LBA Format: LBA Format #00 00:18:26.428 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:18:26.428 00:18:26.428 22:06:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:26.428 [2024-10-12 22:06:44.805731] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:31.713 Initializing NVMe Controllers 00:18:31.713 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:31.713 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:31.713 Initialization complete. Launching workers. 00:18:31.714 ======================================================== 00:18:31.714 Latency(us) 00:18:31.714 Device Information : IOPS MiB/s Average min max 00:18:31.714 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40022.20 156.34 3198.12 837.31 7782.71 00:18:31.714 ======================================================== 00:18:31.714 Total : 40022.20 156.34 3198.12 837.31 7782.71 00:18:31.714 00:18:31.714 [2024-10-12 22:06:49.825692] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:31.714 22:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:31.714 [2024-10-12 22:06:50.002549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:37.001 Initializing NVMe Controllers 00:18:37.001 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: 
nqn.2019-07.io.spdk:cnode1 00:18:37.001 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:37.001 Initialization complete. Launching workers. 00:18:37.001 ======================================================== 00:18:37.001 Latency(us) 00:18:37.001 Device Information : IOPS MiB/s Average min max 00:18:37.001 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15905.00 62.13 8057.07 4990.78 15961.96 00:18:37.001 ======================================================== 00:18:37.001 Total : 15905.00 62.13 8057.07 4990.78 15961.96 00:18:37.001 00:18:37.001 [2024-10-12 22:06:55.037373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:37.001 22:06:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:37.001 [2024-10-12 22:06:55.223204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:42.287 [2024-10-12 22:07:00.323449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:42.287 Initializing NVMe Controllers 00:18:42.287 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:42.288 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:42.288 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:42.288 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:42.288 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:42.288 Initialization complete. Launching workers. 
00:18:42.288 Starting thread on core 2 00:18:42.288 Starting thread on core 3 00:18:42.288 Starting thread on core 1 00:18:42.288 22:07:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:42.288 [2024-10-12 22:07:00.559462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:45.586 [2024-10-12 22:07:03.614666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:45.586 Initializing NVMe Controllers 00:18:45.586 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:45.586 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:45.586 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:45.586 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:45.586 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:45.586 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:45.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:45.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:45.586 Initialization complete. Launching workers. 
00:18:45.586 Starting thread on core 1 with urgent priority queue 00:18:45.586 Starting thread on core 2 with urgent priority queue 00:18:45.586 Starting thread on core 3 with urgent priority queue 00:18:45.586 Starting thread on core 0 with urgent priority queue 00:18:45.586 SPDK bdev Controller (SPDK1 ) core 0: 11891.00 IO/s 8.41 secs/100000 ios 00:18:45.586 SPDK bdev Controller (SPDK1 ) core 1: 10185.00 IO/s 9.82 secs/100000 ios 00:18:45.586 SPDK bdev Controller (SPDK1 ) core 2: 11283.33 IO/s 8.86 secs/100000 ios 00:18:45.586 SPDK bdev Controller (SPDK1 ) core 3: 10785.67 IO/s 9.27 secs/100000 ios 00:18:45.586 ======================================================== 00:18:45.586 00:18:45.586 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:45.586 [2024-10-12 22:07:03.837524] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:45.586 Initializing NVMe Controllers 00:18:45.586 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:45.586 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:45.586 Namespace ID: 1 size: 0GB 00:18:45.586 Initialization complete. 00:18:45.586 INFO: using host memory buffer for IO 00:18:45.586 Hello world! 
00:18:45.586 [2024-10-12 22:07:03.871726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:45.586 22:07:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:45.847 [2024-10-12 22:07:04.094519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:46.787 Initializing NVMe Controllers 00:18:46.787 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:46.787 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:46.787 Initialization complete. Launching workers. 00:18:46.787 submit (in ns) avg, min, max = 5711.5, 2820.0, 3998431.7 00:18:46.787 complete (in ns) avg, min, max = 16458.0, 1641.7, 3997058.3 00:18:46.787 00:18:46.787 Submit histogram 00:18:46.787 ================ 00:18:46.787 Range in us Cumulative Count 00:18:46.787 2.813 - 2.827: 0.1869% ( 38) 00:18:46.787 2.827 - 2.840: 1.4115% ( 249) 00:18:46.787 2.840 - 2.853: 3.2066% ( 365) 00:18:46.787 2.853 - 2.867: 7.6132% ( 896) 00:18:46.787 2.867 - 2.880: 13.8347% ( 1265) 00:18:46.787 2.880 - 2.893: 20.2675% ( 1308) 00:18:46.787 2.893 - 2.907: 27.7332% ( 1518) 00:18:46.787 2.907 - 2.920: 33.4973% ( 1172) 00:18:46.787 2.920 - 2.933: 38.4596% ( 1009) 00:18:46.787 2.933 - 2.947: 42.7876% ( 880) 00:18:46.787 2.947 - 2.960: 47.9860% ( 1057) 00:18:46.787 2.960 - 2.973: 53.5533% ( 1132) 00:18:46.787 2.973 - 2.987: 61.7715% ( 1671) 00:18:46.787 2.987 - 3.000: 71.7208% ( 2023) 00:18:46.787 3.000 - 3.013: 80.4653% ( 1778) 00:18:46.787 3.013 - 3.027: 86.9768% ( 1324) 00:18:46.787 3.027 - 3.040: 92.3376% ( 1090) 00:18:46.787 3.040 - 3.053: 96.2032% ( 786) 00:18:46.787 3.053 - 3.067: 98.3131% ( 429) 00:18:46.787 3.067 - 3.080: 99.2082% ( 182) 00:18:46.787 3.080 - 3.093: 
99.5328% ( 66) 00:18:46.787 3.093 - 3.107: 99.6164% ( 17) 00:18:46.787 3.107 - 3.120: 99.6410% ( 5) 00:18:46.787 3.120 - 3.133: 99.6607% ( 4) 00:18:46.787 3.133 - 3.147: 99.6656% ( 1) 00:18:46.787 3.187 - 3.200: 99.6705% ( 1) 00:18:46.787 3.253 - 3.267: 99.6754% ( 1) 00:18:46.787 3.267 - 3.280: 99.6803% ( 1) 00:18:46.787 3.373 - 3.387: 99.6852% ( 1) 00:18:46.787 3.467 - 3.493: 99.6902% ( 1) 00:18:46.787 3.520 - 3.547: 99.6951% ( 1) 00:18:46.787 3.627 - 3.653: 99.7000% ( 1) 00:18:46.787 3.893 - 3.920: 99.7049% ( 1) 00:18:46.787 4.133 - 4.160: 99.7147% ( 2) 00:18:46.787 4.347 - 4.373: 99.7197% ( 1) 00:18:46.787 4.533 - 4.560: 99.7246% ( 1) 00:18:46.787 4.640 - 4.667: 99.7295% ( 1) 00:18:46.787 4.747 - 4.773: 99.7344% ( 1) 00:18:46.787 4.800 - 4.827: 99.7393% ( 1) 00:18:46.787 4.880 - 4.907: 99.7443% ( 1) 00:18:46.787 4.960 - 4.987: 99.7492% ( 1) 00:18:46.787 4.987 - 5.013: 99.7590% ( 2) 00:18:46.787 5.013 - 5.040: 99.7688% ( 2) 00:18:46.787 5.093 - 5.120: 99.7787% ( 2) 00:18:46.787 5.120 - 5.147: 99.7836% ( 1) 00:18:46.787 5.147 - 5.173: 99.7885% ( 1) 00:18:46.787 5.173 - 5.200: 99.7934% ( 1) 00:18:46.787 5.280 - 5.307: 99.7984% ( 1) 00:18:46.787 5.387 - 5.413: 99.8033% ( 1) 00:18:46.787 5.413 - 5.440: 99.8082% ( 1) 00:18:46.787 5.440 - 5.467: 99.8131% ( 1) 00:18:46.787 5.467 - 5.493: 99.8328% ( 4) 00:18:46.787 5.493 - 5.520: 99.8377% ( 1) 00:18:46.787 5.573 - 5.600: 99.8475% ( 2) 00:18:46.787 5.627 - 5.653: 99.8525% ( 1) 00:18:46.787 5.653 - 5.680: 99.8574% ( 1) 00:18:46.787 5.680 - 5.707: 99.8623% ( 1) 00:18:46.787 5.733 - 5.760: 99.8672% ( 1) 00:18:46.787 5.760 - 5.787: 99.8770% ( 2) 00:18:46.787 5.840 - 5.867: 99.8820% ( 1) 00:18:46.787 5.867 - 5.893: 99.8869% ( 1) 00:18:46.787 5.947 - 5.973: 99.8918% ( 1) 00:18:46.787 6.240 - 6.267: 99.8967% ( 1) 00:18:46.787 6.347 - 6.373: 99.9066% ( 2) 00:18:46.787 6.400 - 6.427: 99.9115% ( 1) 00:18:46.787 6.480 - 6.507: 99.9164% ( 1) 00:18:46.787 6.533 - 6.560: 99.9213% ( 1) 00:18:46.787 6.987 - 7.040: 99.9262% ( 1) 
00:18:46.787 7.040 - 7.093: 99.9311% ( 1) 00:18:46.787 3986.773 - 4014.080: 100.0000% ( 14) 00:18:46.787 00:18:46.787 Complete histogram 00:18:46.787 ================== 00:18:46.787 Range in us Cumulative Count 00:18:46.787 1.640 - 1.647: 0.4721% ( 96) 00:18:46.787 1.647 - 1.653: 0.9689% ( 101) 00:18:46.787 1.653 - 1.660: 1.0082% ( 8) 00:18:46.788 1.660 - 1.667: 1.1164% ( 22) 00:18:46.788 1.667 - 1.673: 1.1803% ( 13) 00:18:46.788 1.673 - 1.680: 1.2394% ( 12) 00:18:46.788 1.680 - 1.687: 1.2541% ( 3) 00:18:46.788 1.687 - [2024-10-12 22:07:05.115268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:46.788 1.693: 1.2738% ( 4) 00:18:46.788 1.693 - 1.700: 9.3247% ( 1637) 00:18:46.788 1.700 - 1.707: 46.4467% ( 7548) 00:18:46.788 1.707 - 1.720: 65.8634% ( 3948) 00:18:46.788 1.720 - 1.733: 79.6685% ( 2807) 00:18:46.788 1.733 - 1.747: 83.1850% ( 715) 00:18:46.788 1.747 - 1.760: 84.3309% ( 233) 00:18:46.788 1.760 - 1.773: 89.2982% ( 1010) 00:18:46.788 1.773 - 1.787: 94.8802% ( 1135) 00:18:46.788 1.787 - 1.800: 97.7918% ( 592) 00:18:46.788 1.800 - 1.813: 99.0754% ( 261) 00:18:46.788 1.813 - 1.827: 99.4295% ( 72) 00:18:46.788 1.827 - 1.840: 99.4738% ( 9) 00:18:46.788 1.853 - 1.867: 99.4836% ( 2) 00:18:46.788 2.053 - 2.067: 99.4885% ( 1) 00:18:46.788 3.493 - 3.520: 99.4984% ( 2) 00:18:46.788 3.627 - 3.653: 99.5033% ( 1) 00:18:46.788 3.680 - 3.707: 99.5082% ( 1) 00:18:46.788 4.027 - 4.053: 99.5131% ( 1) 00:18:46.788 4.080 - 4.107: 99.5229% ( 2) 00:18:46.788 4.133 - 4.160: 99.5377% ( 3) 00:18:46.788 4.293 - 4.320: 99.5426% ( 1) 00:18:46.788 4.347 - 4.373: 99.5475% ( 1) 00:18:46.788 4.427 - 4.453: 99.5525% ( 1) 00:18:46.788 4.480 - 4.507: 99.5623% ( 2) 00:18:46.788 4.613 - 4.640: 99.5672% ( 1) 00:18:46.788 4.640 - 4.667: 99.5721% ( 1) 00:18:46.788 4.880 - 4.907: 99.5770% ( 1) 00:18:46.788 4.933 - 4.960: 99.5820% ( 1) 00:18:46.788 5.040 - 5.067: 99.5869% ( 1) 00:18:46.788 5.147 - 5.173: 99.5918% ( 1) 00:18:46.788 5.227 - 
5.253: 99.5967% ( 1) 00:18:46.788 5.413 - 5.440: 99.6016% ( 1) 00:18:46.788 7.253 - 7.307: 99.6066% ( 1) 00:18:46.788 7.947 - 8.000: 99.6115% ( 1) 00:18:46.788 9.227 - 9.280: 99.6164% ( 1) 00:18:46.788 9.440 - 9.493: 99.6213% ( 1) 00:18:46.788 11.200 - 11.253: 99.6262% ( 1) 00:18:46.788 11.520 - 11.573: 99.6311% ( 1) 00:18:46.788 3986.773 - 4014.080: 100.0000% ( 75) 00:18:46.788 00:18:46.788 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:46.788 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:46.788 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:46.788 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:46.788 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:47.048 [ 00:18:47.048 { 00:18:47.048 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:47.048 "subtype": "Discovery", 00:18:47.048 "listen_addresses": [], 00:18:47.048 "allow_any_host": true, 00:18:47.048 "hosts": [] 00:18:47.048 }, 00:18:47.048 { 00:18:47.048 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:47.048 "subtype": "NVMe", 00:18:47.048 "listen_addresses": [ 00:18:47.048 { 00:18:47.048 "trtype": "VFIOUSER", 00:18:47.048 "adrfam": "IPv4", 00:18:47.048 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:47.048 "trsvcid": "0" 00:18:47.048 } 00:18:47.048 ], 00:18:47.048 "allow_any_host": true, 00:18:47.048 "hosts": [], 00:18:47.048 "serial_number": "SPDK1", 00:18:47.048 "model_number": "SPDK bdev Controller", 00:18:47.048 "max_namespaces": 32, 00:18:47.048 "min_cntlid": 1, 00:18:47.048 "max_cntlid": 65519, 
00:18:47.048 "namespaces": [ 00:18:47.048 { 00:18:47.048 "nsid": 1, 00:18:47.048 "bdev_name": "Malloc1", 00:18:47.048 "name": "Malloc1", 00:18:47.048 "nguid": "3B3D4455E7F045EB8E36245DF376C152", 00:18:47.048 "uuid": "3b3d4455-e7f0-45eb-8e36-245df376c152" 00:18:47.048 } 00:18:47.048 ] 00:18:47.048 }, 00:18:47.048 { 00:18:47.048 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:47.048 "subtype": "NVMe", 00:18:47.048 "listen_addresses": [ 00:18:47.048 { 00:18:47.048 "trtype": "VFIOUSER", 00:18:47.048 "adrfam": "IPv4", 00:18:47.048 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:47.048 "trsvcid": "0" 00:18:47.048 } 00:18:47.048 ], 00:18:47.048 "allow_any_host": true, 00:18:47.048 "hosts": [], 00:18:47.048 "serial_number": "SPDK2", 00:18:47.048 "model_number": "SPDK bdev Controller", 00:18:47.048 "max_namespaces": 32, 00:18:47.048 "min_cntlid": 1, 00:18:47.048 "max_cntlid": 65519, 00:18:47.048 "namespaces": [ 00:18:47.048 { 00:18:47.048 "nsid": 1, 00:18:47.048 "bdev_name": "Malloc2", 00:18:47.048 "name": "Malloc2", 00:18:47.048 "nguid": "CC309581D2DC4AE2A72FF0628F305562", 00:18:47.048 "uuid": "cc309581-d2dc-4ae2-a72f-f0628f305562" 00:18:47.048 } 00:18:47.048 ] 00:18:47.048 } 00:18:47.048 ] 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3488976 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local 
i=0 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:47.048 [2024-10-12 22:07:05.480513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:47.048 Malloc3 00:18:47.048 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:47.309 [2024-10-12 22:07:05.681843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:47.309 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:47.309 Asynchronous Event Request test 00:18:47.309 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:47.309 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:47.309 Registering asynchronous event callbacks... 00:18:47.309 Starting namespace attribute notice tests for all controllers... 00:18:47.309 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:47.309 aer_cb - Changed Namespace 00:18:47.309 Cleaning up... 
00:18:47.572 [ 00:18:47.572 { 00:18:47.572 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:47.572 "subtype": "Discovery", 00:18:47.572 "listen_addresses": [], 00:18:47.572 "allow_any_host": true, 00:18:47.572 "hosts": [] 00:18:47.572 }, 00:18:47.572 { 00:18:47.572 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:47.572 "subtype": "NVMe", 00:18:47.572 "listen_addresses": [ 00:18:47.572 { 00:18:47.572 "trtype": "VFIOUSER", 00:18:47.572 "adrfam": "IPv4", 00:18:47.572 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:47.572 "trsvcid": "0" 00:18:47.572 } 00:18:47.572 ], 00:18:47.572 "allow_any_host": true, 00:18:47.572 "hosts": [], 00:18:47.572 "serial_number": "SPDK1", 00:18:47.572 "model_number": "SPDK bdev Controller", 00:18:47.572 "max_namespaces": 32, 00:18:47.572 "min_cntlid": 1, 00:18:47.572 "max_cntlid": 65519, 00:18:47.572 "namespaces": [ 00:18:47.572 { 00:18:47.572 "nsid": 1, 00:18:47.572 "bdev_name": "Malloc1", 00:18:47.572 "name": "Malloc1", 00:18:47.572 "nguid": "3B3D4455E7F045EB8E36245DF376C152", 00:18:47.572 "uuid": "3b3d4455-e7f0-45eb-8e36-245df376c152" 00:18:47.572 }, 00:18:47.572 { 00:18:47.572 "nsid": 2, 00:18:47.572 "bdev_name": "Malloc3", 00:18:47.572 "name": "Malloc3", 00:18:47.572 "nguid": "52F8B60F43154E0DB7809F622B9CDADE", 00:18:47.572 "uuid": "52f8b60f-4315-4e0d-b780-9f622b9cdade" 00:18:47.572 } 00:18:47.572 ] 00:18:47.572 }, 00:18:47.572 { 00:18:47.572 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:47.572 "subtype": "NVMe", 00:18:47.572 "listen_addresses": [ 00:18:47.572 { 00:18:47.572 "trtype": "VFIOUSER", 00:18:47.572 "adrfam": "IPv4", 00:18:47.572 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:47.572 "trsvcid": "0" 00:18:47.572 } 00:18:47.572 ], 00:18:47.572 "allow_any_host": true, 00:18:47.572 "hosts": [], 00:18:47.572 "serial_number": "SPDK2", 00:18:47.572 "model_number": "SPDK bdev Controller", 00:18:47.572 "max_namespaces": 32, 00:18:47.572 "min_cntlid": 1, 00:18:47.572 "max_cntlid": 65519, 00:18:47.572 "namespaces": [ 
00:18:47.572 { 00:18:47.572 "nsid": 1, 00:18:47.572 "bdev_name": "Malloc2", 00:18:47.572 "name": "Malloc2", 00:18:47.572 "nguid": "CC309581D2DC4AE2A72FF0628F305562", 00:18:47.572 "uuid": "cc309581-d2dc-4ae2-a72f-f0628f305562" 00:18:47.572 } 00:18:47.572 ] 00:18:47.572 } 00:18:47.572 ] 00:18:47.572 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3488976 00:18:47.572 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:47.572 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:47.572 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:47.572 22:07:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:47.572 [2024-10-12 22:07:05.918248] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:18:47.572 [2024-10-12 22:07:05.918297] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489145 ] 00:18:47.572 [2024-10-12 22:07:05.949123] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:47.572 [2024-10-12 22:07:05.959316] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:47.572 [2024-10-12 22:07:05.959334] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f412e829000 00:18:47.572 [2024-10-12 22:07:05.960319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:47.572 [2024-10-12 22:07:05.961327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:47.572 [2024-10-12 22:07:05.962329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:47.572 [2024-10-12 22:07:05.963340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:47.572 [2024-10-12 22:07:05.964343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:47.572 [2024-10-12 22:07:05.965352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:47.572 [2024-10-12 22:07:05.966360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:47.572 
[2024-10-12 22:07:05.967363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:47.572 [2024-10-12 22:07:05.968377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:47.572 [2024-10-12 22:07:05.968386] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f412d533000 00:18:47.572 [2024-10-12 22:07:05.969298] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:47.572 [2024-10-12 22:07:05.977674] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:47.572 [2024-10-12 22:07:05.977694] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:47.572 [2024-10-12 22:07:05.982765] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:47.572 [2024-10-12 22:07:05.982799] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:47.572 [2024-10-12 22:07:05.982859] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:47.572 [2024-10-12 22:07:05.982872] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:47.572 [2024-10-12 22:07:05.982878] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:47.572 [2024-10-12 22:07:05.983767] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:47.572 [2024-10-12 22:07:05.983775] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:47.572 [2024-10-12 22:07:05.983780] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:47.572 [2024-10-12 22:07:05.984773] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:47.572 [2024-10-12 22:07:05.984779] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:47.572 [2024-10-12 22:07:05.984784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:47.572 [2024-10-12 22:07:05.985775] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:47.572 [2024-10-12 22:07:05.985784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:47.572 [2024-10-12 22:07:05.986786] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:47.572 [2024-10-12 22:07:05.986792] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:47.572 [2024-10-12 22:07:05.986795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:47.572 [2024-10-12 22:07:05.986800] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:47.572 [2024-10-12 22:07:05.986904] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:47.572 [2024-10-12 22:07:05.986907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:47.573 [2024-10-12 22:07:05.986911] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:47.573 [2024-10-12 22:07:05.987797] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:47.573 [2024-10-12 22:07:05.988798] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:47.573 [2024-10-12 22:07:05.989805] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:47.573 [2024-10-12 22:07:05.990804] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:47.573 [2024-10-12 22:07:05.990836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:47.573 [2024-10-12 22:07:05.991815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:47.573 [2024-10-12 22:07:05.991823] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:47.573 [2024-10-12 22:07:05.991826] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:05.991843] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:47.573 [2024-10-12 22:07:05.991849] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:05.991857] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:47.573 [2024-10-12 22:07:05.991861] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:47.573 [2024-10-12 22:07:05.991864] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.573 [2024-10-12 22:07:05.991874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:47.573 [2024-10-12 22:07:05.999109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:47.573 [2024-10-12 22:07:05.999118] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:47.573 [2024-10-12 22:07:05.999122] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:47.573 [2024-10-12 22:07:05.999125] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:47.573 [2024-10-12 22:07:05.999129] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:47.573 [2024-10-12 22:07:05.999132] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:47.573 [2024-10-12 22:07:05.999135] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:47.573 [2024-10-12 22:07:05.999139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:05.999144] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:05.999152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:47.573 [2024-10-12 22:07:06.007107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:47.573 [2024-10-12 22:07:06.007118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.573 [2024-10-12 22:07:06.007124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.573 [2024-10-12 22:07:06.007130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.573 [2024-10-12 22:07:06.007136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.573 [2024-10-12 22:07:06.007139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.007146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.007153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:47.573 [2024-10-12 22:07:06.015108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:47.573 [2024-10-12 22:07:06.015115] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:47.573 [2024-10-12 22:07:06.015121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.015126] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.015131] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.015138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:47.573 [2024-10-12 22:07:06.023107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:47.573 [2024-10-12 22:07:06.023154] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.023160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.023165] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:47.573 [2024-10-12 22:07:06.023169] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:47.573 [2024-10-12 22:07:06.023171] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.573 [2024-10-12 22:07:06.023176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:47.573 [2024-10-12 22:07:06.031109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:47.573 [2024-10-12 22:07:06.031118] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:47.573 [2024-10-12 22:07:06.031129] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.031134] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.031139] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:47.573 [2024-10-12 22:07:06.031142] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:47.573 [2024-10-12 22:07:06.031145] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.573 [2024-10-12 22:07:06.031149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:47.573 [2024-10-12 22:07:06.039109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:47.573 [2024-10-12 22:07:06.039121] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.039127] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.039132] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:47.573 [2024-10-12 22:07:06.039135] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:47.573 [2024-10-12 22:07:06.039137] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.573 [2024-10-12 22:07:06.039141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:47.573 [2024-10-12 22:07:06.047108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:47.573 [2024-10-12 22:07:06.047115] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.047120] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.047126] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.047131] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.047134] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.047138] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.047141] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:47.573 [2024-10-12 22:07:06.047145] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:47.573 [2024-10-12 22:07:06.047148] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:47.573 [2024-10-12 22:07:06.047162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:47.573 [2024-10-12 22:07:06.055107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:47.573 [2024-10-12 22:07:06.055118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:47.835 [2024-10-12 22:07:06.063107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:47.835 [2024-10-12 22:07:06.063119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:47.835 [2024-10-12 22:07:06.071108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:47.835 [2024-10-12 22:07:06.071118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:47.835 [2024-10-12 22:07:06.079107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:47.835 [2024-10-12 22:07:06.079122] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:47.835 [2024-10-12 22:07:06.079125] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:47.835 [2024-10-12 22:07:06.079128] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:47.835 [2024-10-12 22:07:06.079130] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:47.835 [2024-10-12 22:07:06.079133] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:47.835 [2024-10-12 22:07:06.079137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:47.835 [2024-10-12 22:07:06.079142] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:47.835 [2024-10-12 22:07:06.079145] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:47.835 [2024-10-12 22:07:06.079148] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.835 [2024-10-12 22:07:06.079154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:47.835 [2024-10-12 22:07:06.079159] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:47.835 [2024-10-12 22:07:06.079162] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:47.835 
[2024-10-12 22:07:06.079165] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.835 [2024-10-12 22:07:06.079169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:47.835 [2024-10-12 22:07:06.079174] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:47.835 [2024-10-12 22:07:06.079177] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:47.835 [2024-10-12 22:07:06.079180] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:47.835 [2024-10-12 22:07:06.079184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:47.835 [2024-10-12 22:07:06.085297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:47.835 [2024-10-12 22:07:06.085311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:47.835 [2024-10-12 22:07:06.085319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:47.835 [2024-10-12 22:07:06.085324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:47.835 ===================================================== 00:18:47.835 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:47.835 ===================================================== 00:18:47.835 Controller Capabilities/Features 00:18:47.835 ================================ 00:18:47.835 Vendor ID: 4e58 00:18:47.835 Subsystem Vendor ID: 4e58 
00:18:47.835 Serial Number: SPDK2 00:18:47.835 Model Number: SPDK bdev Controller 00:18:47.835 Firmware Version: 24.09.1 00:18:47.835 Recommended Arb Burst: 6 00:18:47.835 IEEE OUI Identifier: 8d 6b 50 00:18:47.835 Multi-path I/O 00:18:47.835 May have multiple subsystem ports: Yes 00:18:47.835 May have multiple controllers: Yes 00:18:47.835 Associated with SR-IOV VF: No 00:18:47.835 Max Data Transfer Size: 131072 00:18:47.835 Max Number of Namespaces: 32 00:18:47.835 Max Number of I/O Queues: 127 00:18:47.835 NVMe Specification Version (VS): 1.3 00:18:47.835 NVMe Specification Version (Identify): 1.3 00:18:47.835 Maximum Queue Entries: 256 00:18:47.835 Contiguous Queues Required: Yes 00:18:47.835 Arbitration Mechanisms Supported 00:18:47.835 Weighted Round Robin: Not Supported 00:18:47.835 Vendor Specific: Not Supported 00:18:47.835 Reset Timeout: 15000 ms 00:18:47.835 Doorbell Stride: 4 bytes 00:18:47.835 NVM Subsystem Reset: Not Supported 00:18:47.835 Command Sets Supported 00:18:47.835 NVM Command Set: Supported 00:18:47.835 Boot Partition: Not Supported 00:18:47.835 Memory Page Size Minimum: 4096 bytes 00:18:47.835 Memory Page Size Maximum: 4096 bytes 00:18:47.835 Persistent Memory Region: Not Supported 00:18:47.836 Optional Asynchronous Events Supported 00:18:47.836 Namespace Attribute Notices: Supported 00:18:47.836 Firmware Activation Notices: Not Supported 00:18:47.836 ANA Change Notices: Not Supported 00:18:47.836 PLE Aggregate Log Change Notices: Not Supported 00:18:47.836 LBA Status Info Alert Notices: Not Supported 00:18:47.836 EGE Aggregate Log Change Notices: Not Supported 00:18:47.836 Normal NVM Subsystem Shutdown event: Not Supported 00:18:47.836 Zone Descriptor Change Notices: Not Supported 00:18:47.836 Discovery Log Change Notices: Not Supported 00:18:47.836 Controller Attributes 00:18:47.836 128-bit Host Identifier: Supported 00:18:47.836 Non-Operational Permissive Mode: Not Supported 00:18:47.836 NVM Sets: Not Supported 00:18:47.836 Read 
Recovery Levels: Not Supported 00:18:47.836 Endurance Groups: Not Supported 00:18:47.836 Predictable Latency Mode: Not Supported 00:18:47.836 Traffic Based Keep ALive: Not Supported 00:18:47.836 Namespace Granularity: Not Supported 00:18:47.836 SQ Associations: Not Supported 00:18:47.836 UUID List: Not Supported 00:18:47.836 Multi-Domain Subsystem: Not Supported 00:18:47.836 Fixed Capacity Management: Not Supported 00:18:47.836 Variable Capacity Management: Not Supported 00:18:47.836 Delete Endurance Group: Not Supported 00:18:47.836 Delete NVM Set: Not Supported 00:18:47.836 Extended LBA Formats Supported: Not Supported 00:18:47.836 Flexible Data Placement Supported: Not Supported 00:18:47.836 00:18:47.836 Controller Memory Buffer Support 00:18:47.836 ================================ 00:18:47.836 Supported: No 00:18:47.836 00:18:47.836 Persistent Memory Region Support 00:18:47.836 ================================ 00:18:47.836 Supported: No 00:18:47.836 00:18:47.836 Admin Command Set Attributes 00:18:47.836 ============================ 00:18:47.836 Security Send/Receive: Not Supported 00:18:47.836 Format NVM: Not Supported 00:18:47.836 Firmware Activate/Download: Not Supported 00:18:47.836 Namespace Management: Not Supported 00:18:47.836 Device Self-Test: Not Supported 00:18:47.836 Directives: Not Supported 00:18:47.836 NVMe-MI: Not Supported 00:18:47.836 Virtualization Management: Not Supported 00:18:47.836 Doorbell Buffer Config: Not Supported 00:18:47.836 Get LBA Status Capability: Not Supported 00:18:47.836 Command & Feature Lockdown Capability: Not Supported 00:18:47.836 Abort Command Limit: 4 00:18:47.836 Async Event Request Limit: 4 00:18:47.836 Number of Firmware Slots: N/A 00:18:47.836 Firmware Slot 1 Read-Only: N/A 00:18:47.836 Firmware Activation Without Reset: N/A 00:18:47.836 Multiple Update Detection Support: N/A 00:18:47.836 Firmware Update Granularity: No Information Provided 00:18:47.836 Per-Namespace SMART Log: No 00:18:47.836 Asymmetric Namespace 
Access Log Page: Not Supported 00:18:47.836 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:47.836 Command Effects Log Page: Supported 00:18:47.836 Get Log Page Extended Data: Supported 00:18:47.836 Telemetry Log Pages: Not Supported 00:18:47.836 Persistent Event Log Pages: Not Supported 00:18:47.836 Supported Log Pages Log Page: May Support 00:18:47.836 Commands Supported & Effects Log Page: Not Supported 00:18:47.836 Feature Identifiers & Effects Log Page:May Support 00:18:47.836 NVMe-MI Commands & Effects Log Page: May Support 00:18:47.836 Data Area 4 for Telemetry Log: Not Supported 00:18:47.836 Error Log Page Entries Supported: 128 00:18:47.836 Keep Alive: Supported 00:18:47.836 Keep Alive Granularity: 10000 ms 00:18:47.836 00:18:47.836 NVM Command Set Attributes 00:18:47.836 ========================== 00:18:47.836 Submission Queue Entry Size 00:18:47.836 Max: 64 00:18:47.836 Min: 64 00:18:47.836 Completion Queue Entry Size 00:18:47.836 Max: 16 00:18:47.836 Min: 16 00:18:47.836 Number of Namespaces: 32 00:18:47.836 Compare Command: Supported 00:18:47.836 Write Uncorrectable Command: Not Supported 00:18:47.836 Dataset Management Command: Supported 00:18:47.836 Write Zeroes Command: Supported 00:18:47.836 Set Features Save Field: Not Supported 00:18:47.836 Reservations: Not Supported 00:18:47.836 Timestamp: Not Supported 00:18:47.836 Copy: Supported 00:18:47.836 Volatile Write Cache: Present 00:18:47.836 Atomic Write Unit (Normal): 1 00:18:47.836 Atomic Write Unit (PFail): 1 00:18:47.836 Atomic Compare & Write Unit: 1 00:18:47.836 Fused Compare & Write: Supported 00:18:47.836 Scatter-Gather List 00:18:47.836 SGL Command Set: Supported (Dword aligned) 00:18:47.836 SGL Keyed: Not Supported 00:18:47.836 SGL Bit Bucket Descriptor: Not Supported 00:18:47.836 SGL Metadata Pointer: Not Supported 00:18:47.836 Oversized SGL: Not Supported 00:18:47.836 SGL Metadata Address: Not Supported 00:18:47.836 SGL Offset: Not Supported 00:18:47.836 Transport SGL Data Block: Not 
Supported 00:18:47.836 Replay Protected Memory Block: Not Supported 00:18:47.836 00:18:47.836 Firmware Slot Information 00:18:47.836 ========================= 00:18:47.836 Active slot: 1 00:18:47.836 Slot 1 Firmware Revision: 24.09.1 00:18:47.836 00:18:47.836 00:18:47.836 Commands Supported and Effects 00:18:47.836 ============================== 00:18:47.836 Admin Commands 00:18:47.836 -------------- 00:18:47.836 Get Log Page (02h): Supported 00:18:47.836 Identify (06h): Supported 00:18:47.836 Abort (08h): Supported 00:18:47.836 Set Features (09h): Supported 00:18:47.836 Get Features (0Ah): Supported 00:18:47.836 Asynchronous Event Request (0Ch): Supported 00:18:47.836 Keep Alive (18h): Supported 00:18:47.836 I/O Commands 00:18:47.836 ------------ 00:18:47.836 Flush (00h): Supported LBA-Change 00:18:47.836 Write (01h): Supported LBA-Change 00:18:47.836 Read (02h): Supported 00:18:47.836 Compare (05h): Supported 00:18:47.836 Write Zeroes (08h): Supported LBA-Change 00:18:47.836 Dataset Management (09h): Supported LBA-Change 00:18:47.836 Copy (19h): Supported LBA-Change 00:18:47.836 00:18:47.836 Error Log 00:18:47.836 ========= 00:18:47.836 00:18:47.836 Arbitration 00:18:47.836 =========== 00:18:47.836 Arbitration Burst: 1 00:18:47.836 00:18:47.836 Power Management 00:18:47.836 ================ 00:18:47.836 Number of Power States: 1 00:18:47.836 Current Power State: Power State #0 00:18:47.836 Power State #0: 00:18:47.836 Max Power: 0.00 W 00:18:47.836 Non-Operational State: Operational 00:18:47.836 Entry Latency: Not Reported 00:18:47.836 Exit Latency: Not Reported 00:18:47.836 Relative Read Throughput: 0 00:18:47.836 Relative Read Latency: 0 00:18:47.836 Relative Write Throughput: 0 00:18:47.836 Relative Write Latency: 0 00:18:47.836 Idle Power: Not Reported 00:18:47.836 Active Power: Not Reported 00:18:47.836 Non-Operational Permissive Mode: Not Supported 00:18:47.836 00:18:47.836 Health Information 00:18:47.836 ================== 00:18:47.836 Critical Warnings: 
00:18:47.836 Available Spare Space: OK 00:18:47.836 Temperature: OK 00:18:47.836 Device Reliability: OK 00:18:47.836 Read Only: No 00:18:47.836 Volatile Memory Backup: OK 00:18:47.836 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:47.836 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:47.836 Available Spare: 0% 00:18:47.836 Availabl[2024-10-12 22:07:06.085396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:47.836 [2024-10-12 22:07:06.094108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:47.836 [2024-10-12 22:07:06.094133] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:47.836 [2024-10-12 22:07:06.094140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.836 [2024-10-12 22:07:06.094144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.836 [2024-10-12 22:07:06.094149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.836 [2024-10-12 22:07:06.094153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.836 [2024-10-12 22:07:06.094182] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:47.836 [2024-10-12 22:07:06.094190] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:47.836 [2024-10-12 22:07:06.095187] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:18:47.836 [2024-10-12 22:07:06.095223] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:47.836 [2024-10-12 22:07:06.095228] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:47.836 [2024-10-12 22:07:06.096188] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:47.836 [2024-10-12 22:07:06.096197] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:47.836 [2024-10-12 22:07:06.096244] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:47.836 [2024-10-12 22:07:06.097214] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:47.836 e Spare Threshold: 0% 00:18:47.836 Life Percentage Used: 0% 00:18:47.836 Data Units Read: 0 00:18:47.836 Data Units Written: 0 00:18:47.836 Host Read Commands: 0 00:18:47.836 Host Write Commands: 0 00:18:47.837 Controller Busy Time: 0 minutes 00:18:47.837 Power Cycles: 0 00:18:47.837 Power On Hours: 0 hours 00:18:47.837 Unsafe Shutdowns: 0 00:18:47.837 Unrecoverable Media Errors: 0 00:18:47.837 Lifetime Error Log Entries: 0 00:18:47.837 Warning Temperature Time: 0 minutes 00:18:47.837 Critical Temperature Time: 0 minutes 00:18:47.837 00:18:47.837 Number of Queues 00:18:47.837 ================ 00:18:47.837 Number of I/O Submission Queues: 127 00:18:47.837 Number of I/O Completion Queues: 127 00:18:47.837 00:18:47.837 Active Namespaces 00:18:47.837 ================= 00:18:47.837 Namespace ID:1 00:18:47.837 Error Recovery Timeout: Unlimited 00:18:47.837 Command Set Identifier: NVM (00h) 00:18:47.837 Deallocate: Supported 00:18:47.837 Deallocated/Unwritten Error: 
Not Supported 00:18:47.837 Deallocated Read Value: Unknown 00:18:47.837 Deallocate in Write Zeroes: Not Supported 00:18:47.837 Deallocated Guard Field: 0xFFFF 00:18:47.837 Flush: Supported 00:18:47.837 Reservation: Supported 00:18:47.837 Namespace Sharing Capabilities: Multiple Controllers 00:18:47.837 Size (in LBAs): 131072 (0GiB) 00:18:47.837 Capacity (in LBAs): 131072 (0GiB) 00:18:47.837 Utilization (in LBAs): 131072 (0GiB) 00:18:47.837 NGUID: CC309581D2DC4AE2A72FF0628F305562 00:18:47.837 UUID: cc309581-d2dc-4ae2-a72f-f0628f305562 00:18:47.837 Thin Provisioning: Not Supported 00:18:47.837 Per-NS Atomic Units: Yes 00:18:47.837 Atomic Boundary Size (Normal): 0 00:18:47.837 Atomic Boundary Size (PFail): 0 00:18:47.837 Atomic Boundary Offset: 0 00:18:47.837 Maximum Single Source Range Length: 65535 00:18:47.837 Maximum Copy Length: 65535 00:18:47.837 Maximum Source Range Count: 1 00:18:47.837 NGUID/EUI64 Never Reused: No 00:18:47.837 Namespace Write Protected: No 00:18:47.837 Number of LBA Formats: 1 00:18:47.837 Current LBA Format: LBA Format #00 00:18:47.837 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:47.837 00:18:47.837 22:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:47.837 [2024-10-12 22:07:06.275489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:53.124 Initializing NVMe Controllers 00:18:53.124 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:53.124 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:53.124 Initialization complete. Launching workers. 
00:18:53.124 ======================================================== 00:18:53.124 Latency(us) 00:18:53.124 Device Information : IOPS MiB/s Average min max 00:18:53.124 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40008.96 156.28 3199.15 834.01 7782.12 00:18:53.124 ======================================================== 00:18:53.124 Total : 40008.96 156.28 3199.15 834.01 7782.12 00:18:53.124 00:18:53.124 [2024-10-12 22:07:11.378297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:53.124 22:07:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:53.124 [2024-10-12 22:07:11.561885] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:58.412 Initializing NVMe Controllers 00:18:58.412 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:58.412 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:58.412 Initialization complete. Launching workers. 
00:18:58.412 ======================================================== 00:18:58.412 Latency(us) 00:18:58.412 Device Information : IOPS MiB/s Average min max 00:18:58.412 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39982.82 156.18 3201.25 845.74 6821.98 00:18:58.412 ======================================================== 00:18:58.412 Total : 39982.82 156.18 3201.25 845.74 6821.98 00:18:58.412 00:18:58.412 [2024-10-12 22:07:16.579142] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:58.412 22:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:58.412 [2024-10-12 22:07:16.773492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:03.759 [2024-10-12 22:07:21.915193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:03.759 Initializing NVMe Controllers 00:19:03.759 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:03.759 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:03.759 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:03.759 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:03.759 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:03.759 Initialization complete. Launching workers. 
00:19:03.759 Starting thread on core 2 00:19:03.759 Starting thread on core 3 00:19:03.759 Starting thread on core 1 00:19:03.759 22:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:03.759 [2024-10-12 22:07:22.149532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:07.061 [2024-10-12 22:07:25.211235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:07.061 Initializing NVMe Controllers 00:19:07.061 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:07.061 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:07.061 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:07.061 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:07.061 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:07.061 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:07.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:07.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:07.061 Initialization complete. Launching workers. 
00:19:07.061 Starting thread on core 1 with urgent priority queue 00:19:07.061 Starting thread on core 2 with urgent priority queue 00:19:07.061 Starting thread on core 3 with urgent priority queue 00:19:07.061 Starting thread on core 0 with urgent priority queue 00:19:07.061 SPDK bdev Controller (SPDK2 ) core 0: 13106.00 IO/s 7.63 secs/100000 ios 00:19:07.061 SPDK bdev Controller (SPDK2 ) core 1: 12608.33 IO/s 7.93 secs/100000 ios 00:19:07.061 SPDK bdev Controller (SPDK2 ) core 2: 12526.00 IO/s 7.98 secs/100000 ios 00:19:07.061 SPDK bdev Controller (SPDK2 ) core 3: 9709.33 IO/s 10.30 secs/100000 ios 00:19:07.061 ======================================================== 00:19:07.061 00:19:07.061 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:07.061 [2024-10-12 22:07:25.437510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:07.061 Initializing NVMe Controllers 00:19:07.061 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:07.061 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:07.061 Namespace ID: 1 size: 0GB 00:19:07.061 Initialization complete. 00:19:07.061 INFO: using host memory buffer for IO 00:19:07.061 Hello world! 
00:19:07.061 [2024-10-12 22:07:25.447582] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:07.061 22:07:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:07.322 [2024-10-12 22:07:25.671490] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:08.282 Initializing NVMe Controllers 00:19:08.282 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:08.282 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:08.282 Initialization complete. Launching workers. 00:19:08.282 submit (in ns) avg, min, max = 5882.8, 2809.2, 3998294.2 00:19:08.282 complete (in ns) avg, min, max = 15741.2, 1673.3, 3997938.3 00:19:08.282 00:19:08.282 Submit histogram 00:19:08.282 ================ 00:19:08.282 Range in us Cumulative Count 00:19:08.282 2.800 - 2.813: 0.0292% ( 6) 00:19:08.282 2.813 - 2.827: 0.3651% ( 69) 00:19:08.282 2.827 - 2.840: 1.4993% ( 233) 00:19:08.282 2.840 - 2.853: 4.7948% ( 677) 00:19:08.282 2.853 - 2.867: 10.0570% ( 1081) 00:19:08.282 2.867 - 2.880: 15.0903% ( 1034) 00:19:08.282 2.880 - 2.893: 19.6417% ( 935) 00:19:08.282 2.893 - 2.907: 24.6410% ( 1027) 00:19:08.282 2.907 - 2.920: 30.1952% ( 1141) 00:19:08.282 2.920 - 2.933: 35.6180% ( 1114) 00:19:08.282 2.933 - 2.947: 40.8412% ( 1073) 00:19:08.282 2.947 - 2.960: 45.3877% ( 934) 00:19:08.282 2.960 - 2.973: 50.8543% ( 1123) 00:19:08.282 2.973 - 2.987: 59.5142% ( 1779) 00:19:08.282 2.987 - 3.000: 70.1455% ( 2184) 00:19:08.282 3.000 - 3.013: 79.6281% ( 1948) 00:19:08.282 3.013 - 3.027: 86.8033% ( 1474) 00:19:08.282 3.027 - 3.040: 91.7880% ( 1024) 00:19:08.282 3.040 - 3.053: 95.2052% ( 702) 00:19:08.282 3.053 - 3.067: 97.3324% ( 437) 00:19:08.282 3.067 - 3.080: 
98.5737% ( 255) 00:19:08.282 3.080 - 3.093: 99.2796% ( 145) 00:19:08.282 3.093 - 3.107: 99.4889% ( 43) 00:19:08.282 3.107 - 3.120: 99.5522% ( 13) 00:19:08.282 3.120 - 3.133: 99.5716% ( 4) 00:19:08.282 3.133 - 3.147: 99.5765% ( 1) 00:19:08.282 3.147 - 3.160: 99.5814% ( 1) 00:19:08.282 3.173 - 3.187: 99.5862% ( 1) 00:19:08.282 3.200 - 3.213: 99.5911% ( 1) 00:19:08.282 3.360 - 3.373: 99.5960% ( 1) 00:19:08.282 3.493 - 3.520: 99.6057% ( 2) 00:19:08.282 3.520 - 3.547: 99.6106% ( 1) 00:19:08.282 3.573 - 3.600: 99.6154% ( 1) 00:19:08.282 3.787 - 3.813: 99.6203% ( 1) 00:19:08.282 3.920 - 3.947: 99.6252% ( 1) 00:19:08.282 4.107 - 4.133: 99.6300% ( 1) 00:19:08.282 4.133 - 4.160: 99.6398% ( 2) 00:19:08.282 4.187 - 4.213: 99.6495% ( 2) 00:19:08.282 4.293 - 4.320: 99.6544% ( 1) 00:19:08.282 4.373 - 4.400: 99.6593% ( 1) 00:19:08.282 4.587 - 4.613: 99.6641% ( 1) 00:19:08.282 4.747 - 4.773: 99.6739% ( 2) 00:19:08.282 4.800 - 4.827: 99.6787% ( 1) 00:19:08.282 4.827 - 4.853: 99.6885% ( 2) 00:19:08.282 4.880 - 4.907: 99.6933% ( 1) 00:19:08.282 4.907 - 4.933: 99.6982% ( 1) 00:19:08.282 4.933 - 4.960: 99.7031% ( 1) 00:19:08.282 4.960 - 4.987: 99.7079% ( 1) 00:19:08.282 4.987 - 5.013: 99.7177% ( 2) 00:19:08.282 5.013 - 5.040: 99.7225% ( 1) 00:19:08.282 5.067 - 5.093: 99.7323% ( 2) 00:19:08.282 5.093 - 5.120: 99.7371% ( 1) 00:19:08.282 5.173 - 5.200: 99.7420% ( 1) 00:19:08.282 5.307 - 5.333: 99.7566% ( 3) 00:19:08.282 5.333 - 5.360: 99.7615% ( 1) 00:19:08.282 5.360 - 5.387: 99.7712% ( 2) 00:19:08.282 5.387 - 5.413: 99.7761% ( 1) 00:19:08.282 5.493 - 5.520: 99.7809% ( 1) 00:19:08.282 5.547 - 5.573: 99.7858% ( 1) 00:19:08.282 5.573 - 5.600: 99.7956% ( 2) 00:19:08.282 5.760 - 5.787: 99.8004% ( 1) 00:19:08.282 5.787 - 5.813: 99.8053% ( 1) 00:19:08.282 5.867 - 5.893: 99.8102% ( 1) 00:19:08.282 5.893 - 5.920: 99.8150% ( 1) 00:19:08.282 5.920 - 5.947: 99.8199% ( 1) 00:19:08.282 5.947 - 5.973: 99.8248% ( 1) 00:19:08.282 5.973 - 6.000: 99.8296% ( 1) 00:19:08.282 6.000 - 6.027: 99.8345% ( 1) 
00:19:08.282 6.053 - 6.080: 99.8442% ( 2) 00:19:08.282 6.107 - 6.133: 99.8540% ( 2) 00:19:08.282 6.133 - 6.160: 99.8588% ( 1) 00:19:08.282 6.267 - 6.293: 99.8637% ( 1) 00:19:08.282 6.293 - 6.320: 99.8686% ( 1) 00:19:08.282 6.320 - 6.347: 99.8734% ( 1) 00:19:08.282 6.373 - 6.400: 99.8783% ( 1) 00:19:08.282 6.400 - 6.427: 99.8832% ( 1) 00:19:08.282 6.427 - 6.453: 99.8880% ( 1) 00:19:08.282 6.533 - 6.560: 99.8929% ( 1) 00:19:08.282 6.720 - 6.747: 99.9026% ( 2) 00:19:08.282 [2024-10-12 22:07:26.766632] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:08.543 6.747 - 6.773: 99.9075% ( 1) 00:19:08.543 6.827 - 6.880: 99.9124% ( 1) 00:19:08.543 7.200 - 7.253: 99.9172% ( 1) 00:19:08.543 7.413 - 7.467: 99.9221% ( 1) 00:19:08.543 11.573 - 11.627: 99.9270% ( 1) 00:19:08.543 3986.773 - 4014.080: 100.0000% ( 15) 00:19:08.543 00:19:08.543 Complete histogram 00:19:08.543 ================== 00:19:08.543 Range in us Cumulative Count 00:19:08.543 1.673 - 1.680: 0.2629% ( 54) 00:19:08.543 1.680 - 1.687: 0.6864% ( 87) 00:19:08.543 1.687 - 1.693: 0.8081% ( 25) 00:19:08.543 1.693 - 1.700: 0.9444% ( 28) 00:19:08.543 1.700 - 1.707: 2.5605% ( 332) 00:19:08.543 1.707 - 1.720: 46.5365% ( 9034) 00:19:08.543 1.720 - 1.733: 69.5371% ( 4725) 00:19:08.543 1.733 - 1.747: 80.5189% ( 2256) 00:19:08.543 1.747 - 1.760: 83.3958% ( 591) 00:19:08.543 1.760 - 1.773: 86.1510% ( 566) 00:19:08.543 1.773 - 1.787: 90.9458% ( 985) 00:19:08.543 1.787 - 1.800: 95.5119% ( 938) 00:19:08.543 1.800 - 1.813: 98.1648% ( 545) 00:19:08.543 1.813 - 1.827: 99.1579% ( 204) 00:19:08.543 1.827 - 1.840: 99.4451% ( 59) 00:19:08.543 1.840 - 1.853: 99.5035% ( 12) 00:19:08.543 2.107 - 2.120: 99.5083% ( 1) 00:19:08.543 3.493 - 3.520: 99.5132% ( 1) 00:19:08.543 3.680 - 3.707: 99.5230% ( 2) 00:19:08.543 3.813 - 3.840: 99.5278% ( 1) 00:19:08.543 3.893 - 3.920: 99.5327% ( 1) 00:19:08.543 4.053 - 4.080: 99.5376% ( 1) 00:19:08.543 4.080 - 4.107: 99.5473% ( 2) 00:19:08.543 4.107 - 
4.133: 99.5522% ( 1) 00:19:08.543 4.213 - 4.240: 99.5570% ( 1) 00:19:08.543 4.240 - 4.267: 99.5619% ( 1) 00:19:08.543 4.400 - 4.427: 99.5668% ( 1) 00:19:08.543 4.453 - 4.480: 99.5716% ( 1) 00:19:08.543 4.613 - 4.640: 99.5765% ( 1) 00:19:08.543 4.640 - 4.667: 99.5814% ( 1) 00:19:08.543 4.667 - 4.693: 99.5862% ( 1) 00:19:08.543 4.720 - 4.747: 99.5911% ( 1) 00:19:08.543 4.800 - 4.827: 99.5960% ( 1) 00:19:08.543 4.880 - 4.907: 99.6008% ( 1) 00:19:08.543 4.907 - 4.933: 99.6057% ( 1) 00:19:08.543 4.987 - 5.013: 99.6106% ( 1) 00:19:08.543 5.013 - 5.040: 99.6154% ( 1) 00:19:08.543 5.200 - 5.227: 99.6203% ( 1) 00:19:08.543 5.413 - 5.440: 99.6252% ( 1) 00:19:08.543 9.653 - 9.707: 99.6300% ( 1) 00:19:08.543 9.920 - 9.973: 99.6349% ( 1) 00:19:08.543 11.147 - 11.200: 99.6398% ( 1) 00:19:08.543 67.413 - 67.840: 99.6446% ( 1) 00:19:08.543 81.067 - 81.493: 99.6495% ( 1) 00:19:08.543 3986.773 - 4014.080: 100.0000% ( 72) 00:19:08.543 00:19:08.543 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:08.543 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:08.543 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:08.543 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:08.543 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:08.543 [ 00:19:08.543 { 00:19:08.543 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:08.543 "subtype": "Discovery", 00:19:08.543 "listen_addresses": [], 00:19:08.543 "allow_any_host": true, 00:19:08.543 "hosts": [] 00:19:08.543 }, 00:19:08.543 { 00:19:08.543 "nqn": 
"nqn.2019-07.io.spdk:cnode1", 00:19:08.543 "subtype": "NVMe", 00:19:08.543 "listen_addresses": [ 00:19:08.543 { 00:19:08.543 "trtype": "VFIOUSER", 00:19:08.543 "adrfam": "IPv4", 00:19:08.543 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:08.543 "trsvcid": "0" 00:19:08.543 } 00:19:08.543 ], 00:19:08.543 "allow_any_host": true, 00:19:08.543 "hosts": [], 00:19:08.543 "serial_number": "SPDK1", 00:19:08.543 "model_number": "SPDK bdev Controller", 00:19:08.543 "max_namespaces": 32, 00:19:08.543 "min_cntlid": 1, 00:19:08.543 "max_cntlid": 65519, 00:19:08.543 "namespaces": [ 00:19:08.543 { 00:19:08.543 "nsid": 1, 00:19:08.543 "bdev_name": "Malloc1", 00:19:08.543 "name": "Malloc1", 00:19:08.543 "nguid": "3B3D4455E7F045EB8E36245DF376C152", 00:19:08.543 "uuid": "3b3d4455-e7f0-45eb-8e36-245df376c152" 00:19:08.543 }, 00:19:08.543 { 00:19:08.543 "nsid": 2, 00:19:08.543 "bdev_name": "Malloc3", 00:19:08.543 "name": "Malloc3", 00:19:08.543 "nguid": "52F8B60F43154E0DB7809F622B9CDADE", 00:19:08.543 "uuid": "52f8b60f-4315-4e0d-b780-9f622b9cdade" 00:19:08.543 } 00:19:08.543 ] 00:19:08.543 }, 00:19:08.543 { 00:19:08.543 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:08.543 "subtype": "NVMe", 00:19:08.543 "listen_addresses": [ 00:19:08.543 { 00:19:08.543 "trtype": "VFIOUSER", 00:19:08.543 "adrfam": "IPv4", 00:19:08.543 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:08.543 "trsvcid": "0" 00:19:08.543 } 00:19:08.543 ], 00:19:08.543 "allow_any_host": true, 00:19:08.543 "hosts": [], 00:19:08.543 "serial_number": "SPDK2", 00:19:08.543 "model_number": "SPDK bdev Controller", 00:19:08.543 "max_namespaces": 32, 00:19:08.543 "min_cntlid": 1, 00:19:08.543 "max_cntlid": 65519, 00:19:08.543 "namespaces": [ 00:19:08.543 { 00:19:08.543 "nsid": 1, 00:19:08.543 "bdev_name": "Malloc2", 00:19:08.543 "name": "Malloc2", 00:19:08.543 "nguid": "CC309581D2DC4AE2A72FF0628F305562", 00:19:08.543 "uuid": "cc309581-d2dc-4ae2-a72f-f0628f305562" 00:19:08.544 } 00:19:08.544 ] 00:19:08.544 } 
00:19:08.544 ] 00:19:08.544 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:08.544 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3493329 00:19:08.544 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:08.544 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:08.544 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:08.544 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:08.544 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:08.544 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:08.544 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:08.544 22:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:08.804 [2024-10-12 22:07:27.125372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:08.805 Malloc4 00:19:08.805 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:09.065 [2024-10-12 22:07:27.335849] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:09.065 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:09.065 Asynchronous Event Request test 00:19:09.065 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:09.065 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:09.065 Registering asynchronous event callbacks... 00:19:09.065 Starting namespace attribute notice tests for all controllers... 00:19:09.065 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:09.065 aer_cb - Changed Namespace 00:19:09.065 Cleaning up... 
00:19:09.065 [ 00:19:09.065 { 00:19:09.065 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:09.065 "subtype": "Discovery", 00:19:09.065 "listen_addresses": [], 00:19:09.065 "allow_any_host": true, 00:19:09.065 "hosts": [] 00:19:09.065 }, 00:19:09.065 { 00:19:09.065 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:09.065 "subtype": "NVMe", 00:19:09.065 "listen_addresses": [ 00:19:09.065 { 00:19:09.065 "trtype": "VFIOUSER", 00:19:09.065 "adrfam": "IPv4", 00:19:09.065 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:09.065 "trsvcid": "0" 00:19:09.065 } 00:19:09.065 ], 00:19:09.065 "allow_any_host": true, 00:19:09.065 "hosts": [], 00:19:09.065 "serial_number": "SPDK1", 00:19:09.065 "model_number": "SPDK bdev Controller", 00:19:09.065 "max_namespaces": 32, 00:19:09.065 "min_cntlid": 1, 00:19:09.065 "max_cntlid": 65519, 00:19:09.065 "namespaces": [ 00:19:09.065 { 00:19:09.065 "nsid": 1, 00:19:09.065 "bdev_name": "Malloc1", 00:19:09.065 "name": "Malloc1", 00:19:09.065 "nguid": "3B3D4455E7F045EB8E36245DF376C152", 00:19:09.065 "uuid": "3b3d4455-e7f0-45eb-8e36-245df376c152" 00:19:09.065 }, 00:19:09.065 { 00:19:09.065 "nsid": 2, 00:19:09.065 "bdev_name": "Malloc3", 00:19:09.065 "name": "Malloc3", 00:19:09.065 "nguid": "52F8B60F43154E0DB7809F622B9CDADE", 00:19:09.065 "uuid": "52f8b60f-4315-4e0d-b780-9f622b9cdade" 00:19:09.065 } 00:19:09.065 ] 00:19:09.065 }, 00:19:09.065 { 00:19:09.065 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:09.065 "subtype": "NVMe", 00:19:09.065 "listen_addresses": [ 00:19:09.065 { 00:19:09.065 "trtype": "VFIOUSER", 00:19:09.065 "adrfam": "IPv4", 00:19:09.065 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:09.065 "trsvcid": "0" 00:19:09.065 } 00:19:09.065 ], 00:19:09.065 "allow_any_host": true, 00:19:09.065 "hosts": [], 00:19:09.065 "serial_number": "SPDK2", 00:19:09.065 "model_number": "SPDK bdev Controller", 00:19:09.065 "max_namespaces": 32, 00:19:09.065 "min_cntlid": 1, 00:19:09.065 "max_cntlid": 65519, 00:19:09.065 "namespaces": [ 
00:19:09.065 { 00:19:09.065 "nsid": 1, 00:19:09.065 "bdev_name": "Malloc2", 00:19:09.065 "name": "Malloc2", 00:19:09.065 "nguid": "CC309581D2DC4AE2A72FF0628F305562", 00:19:09.065 "uuid": "cc309581-d2dc-4ae2-a72f-f0628f305562" 00:19:09.065 }, 00:19:09.065 { 00:19:09.065 "nsid": 2, 00:19:09.065 "bdev_name": "Malloc4", 00:19:09.065 "name": "Malloc4", 00:19:09.065 "nguid": "174DDDCFA0C24ACBA1A5F09712936772", 00:19:09.065 "uuid": "174dddcf-a0c2-4acb-a1a5-f09712936772" 00:19:09.065 } 00:19:09.065 ] 00:19:09.065 } 00:19:09.065 ] 00:19:09.065 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3493329 00:19:09.065 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:09.065 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3484260 00:19:09.065 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3484260 ']' 00:19:09.065 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3484260 00:19:09.065 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:09.065 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.065 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3484260 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3484260' 00:19:09.326 killing process with pid 3484260 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 3484260 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3484260 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3493355 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3493355' 00:19:09.326 Process pid: 3493355 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3493355 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3493355 ']' 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.326 
22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.326 22:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:09.326 [2024-10-12 22:07:27.809894] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:09.326 [2024-10-12 22:07:27.810841] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:09.326 [2024-10-12 22:07:27.810883] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.588 [2024-10-12 22:07:27.888963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.588 [2024-10-12 22:07:27.917492] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.588 [2024-10-12 22:07:27.917527] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.588 [2024-10-12 22:07:27.917533] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.588 [2024-10-12 22:07:27.917538] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.588 [2024-10-12 22:07:27.917542] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:09.588 [2024-10-12 22:07:27.917678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.588 [2024-10-12 22:07:27.917833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.588 [2024-10-12 22:07:27.917988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.588 [2024-10-12 22:07:27.917990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.588 [2024-10-12 22:07:27.975132] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:09.588 [2024-10-12 22:07:27.976371] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:09.588 [2024-10-12 22:07:27.976873] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:09.588 [2024-10-12 22:07:27.977359] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:09.588 [2024-10-12 22:07:27.977390] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:19:10.158 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.158 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:10.158 22:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:11.542 22:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:11.542 22:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:11.542 22:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:11.542 22:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:11.543 22:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:11.543 22:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:11.543 Malloc1 00:19:11.804 22:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:11.804 22:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:12.064 22:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:12.324 22:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:12.324 22:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:12.324 22:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:12.585 Malloc2 00:19:12.585 22:07:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:12.585 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:12.845 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:13.106 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:13.106 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3493355 00:19:13.106 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3493355 ']' 00:19:13.106 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3493355 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.107 22:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3493355 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3493355' 00:19:13.107 killing process with pid 3493355 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3493355 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3493355 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:13.107 00:19:13.107 real 0m50.809s 00:19:13.107 user 3m14.535s 00:19:13.107 sys 0m2.699s 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:13.107 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:13.107 ************************************ 00:19:13.107 END TEST nvmf_vfio_user 00:19:13.107 ************************************ 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.369 ************************************ 00:19:13.369 START TEST nvmf_vfio_user_nvme_compliance 00:19:13.369 ************************************ 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:13.369 * Looking for test storage... 00:19:13.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.369 22:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.369 22:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:13.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.369 --rc genhtml_branch_coverage=1 00:19:13.369 --rc genhtml_function_coverage=1 00:19:13.369 --rc genhtml_legend=1 00:19:13.369 --rc geninfo_all_blocks=1 00:19:13.369 --rc geninfo_unexecuted_blocks=1 00:19:13.369 00:19:13.369 ' 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:13.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.369 --rc genhtml_branch_coverage=1 00:19:13.369 --rc genhtml_function_coverage=1 00:19:13.369 --rc genhtml_legend=1 00:19:13.369 --rc geninfo_all_blocks=1 00:19:13.369 --rc geninfo_unexecuted_blocks=1 00:19:13.369 00:19:13.369 ' 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:13.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.369 --rc genhtml_branch_coverage=1 00:19:13.369 --rc genhtml_function_coverage=1 00:19:13.369 --rc 
genhtml_legend=1 00:19:13.369 --rc geninfo_all_blocks=1 00:19:13.369 --rc geninfo_unexecuted_blocks=1 00:19:13.369 00:19:13.369 ' 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:13.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.369 --rc genhtml_branch_coverage=1 00:19:13.369 --rc genhtml_function_coverage=1 00:19:13.369 --rc genhtml_legend=1 00:19:13.369 --rc geninfo_all_blocks=1 00:19:13.369 --rc geninfo_unexecuted_blocks=1 00:19:13.369 00:19:13.369 ' 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.369 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.631 22:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.631 22:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3494301 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3494301' 00:19:13.631 Process pid: 3494301 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3494301 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3494301 ']' 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.631 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:13.632 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.632 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:13.632 22:07:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.632 [2024-10-12 22:07:31.937274] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:13.632 [2024-10-12 22:07:31.937323] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.632 [2024-10-12 22:07:32.007726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:13.632 [2024-10-12 22:07:32.036173] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.632 [2024-10-12 22:07:32.036208] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.632 [2024-10-12 22:07:32.036215] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.632 [2024-10-12 22:07:32.036220] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.632 [2024-10-12 22:07:32.036225] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:13.632 [2024-10-12 22:07:32.036413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.632 [2024-10-12 22:07:32.036623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.632 [2024-10-12 22:07:32.036624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.632 22:07:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:13.632 22:07:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:19:13.632 22:07:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:15.015 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:15.015 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.016 22:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:15.016 malloc0 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:15.016 22:07:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:15.016 00:19:15.016 00:19:15.016 CUnit - A unit testing framework for C - Version 2.1-3 00:19:15.016 http://cunit.sourceforge.net/ 00:19:15.016 00:19:15.016 00:19:15.016 Suite: nvme_compliance 00:19:15.016 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-12 22:07:33.342520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.016 [2024-10-12 22:07:33.343822] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:15.016 [2024-10-12 22:07:33.343834] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:15.016 [2024-10-12 22:07:33.343839] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:15.016 [2024-10-12 22:07:33.345539] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.016 passed 00:19:15.016 Test: admin_identify_ctrlr_verify_fused ...[2024-10-12 22:07:33.421029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.016 [2024-10-12 22:07:33.424046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.016 passed 00:19:15.016 Test: admin_identify_ns ...[2024-10-12 22:07:33.499521] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.277 [2024-10-12 22:07:33.560111] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:15.277 [2024-10-12 22:07:33.568113] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:15.277 [2024-10-12 22:07:33.589198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:15.277 passed 00:19:15.277 Test: admin_get_features_mandatory_features ...[2024-10-12 22:07:33.662410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.277 [2024-10-12 22:07:33.665422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.277 passed 00:19:15.277 Test: admin_get_features_optional_features ...[2024-10-12 22:07:33.741875] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.277 [2024-10-12 22:07:33.744890] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.537 passed 00:19:15.537 Test: admin_set_features_number_of_queues ...[2024-10-12 22:07:33.822477] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.537 [2024-10-12 22:07:33.928192] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.537 passed 00:19:15.537 Test: admin_get_log_page_mandatory_logs ...[2024-10-12 22:07:34.003408] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.537 [2024-10-12 22:07:34.006432] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.797 passed 00:19:15.797 Test: admin_get_log_page_with_lpo ...[2024-10-12 22:07:34.079449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.797 [2024-10-12 22:07:34.151112] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:15.797 [2024-10-12 22:07:34.164157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.797 passed 00:19:15.797 Test: fabric_property_get ...[2024-10-12 22:07:34.237386] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:15.797 [2024-10-12 22:07:34.238588] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:15.797 [2024-10-12 22:07:34.240406] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:15.797 passed 00:19:16.057 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-12 22:07:34.316880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.057 [2024-10-12 22:07:34.318077] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:16.057 [2024-10-12 22:07:34.319900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.057 passed 00:19:16.057 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-12 22:07:34.395619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.057 [2024-10-12 22:07:34.480109] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:16.057 [2024-10-12 22:07:34.496105] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:16.057 [2024-10-12 22:07:34.501177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.057 passed 00:19:16.318 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-12 22:07:34.574360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.318 [2024-10-12 22:07:34.575570] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:16.318 [2024-10-12 22:07:34.577378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.318 passed 00:19:16.318 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-12 22:07:34.652469] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.318 [2024-10-12 22:07:34.728107] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:16.318 [2024-10-12 
22:07:34.752109] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:16.318 [2024-10-12 22:07:34.757172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.318 passed 00:19:16.579 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-12 22:07:34.833236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.579 [2024-10-12 22:07:34.834437] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:16.579 [2024-10-12 22:07:34.834456] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:16.579 [2024-10-12 22:07:34.836253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.579 passed 00:19:16.579 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-12 22:07:34.913480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.579 [2024-10-12 22:07:35.006107] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:16.579 [2024-10-12 22:07:35.014108] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:16.579 [2024-10-12 22:07:35.022108] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:16.579 [2024-10-12 22:07:35.030109] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:16.579 [2024-10-12 22:07:35.059180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.840 passed 00:19:16.840 Test: admin_create_io_sq_verify_pc ...[2024-10-12 22:07:35.131416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.840 [2024-10-12 22:07:35.148115] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:16.840 [2024-10-12 22:07:35.165578] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.840 passed 00:19:16.840 Test: admin_create_io_qp_max_qps ...[2024-10-12 22:07:35.244045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:18.223 [2024-10-12 22:07:36.343110] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:18.483 [2024-10-12 22:07:36.739400] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:18.483 passed 00:19:18.483 Test: admin_create_io_sq_shared_cq ...[2024-10-12 22:07:36.814475] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:18.483 [2024-10-12 22:07:36.946108] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:18.744 [2024-10-12 22:07:36.983146] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:18.744 passed 00:19:18.744 00:19:18.744 Run Summary: Type Total Ran Passed Failed Inactive 00:19:18.744 suites 1 1 n/a 0 0 00:19:18.744 tests 18 18 18 0 0 00:19:18.744 asserts 360 360 360 0 n/a 00:19:18.744 00:19:18.744 Elapsed time = 1.498 seconds 00:19:18.744 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3494301 00:19:18.744 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3494301 ']' 00:19:18.744 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3494301 00:19:18.744 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:19:18.744 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:18.744 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3494301 00:19:18.744 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:18.744 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:18.744 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3494301' 00:19:18.745 killing process with pid 3494301 00:19:18.745 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3494301 00:19:18.745 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3494301 00:19:18.745 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:18.745 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:18.745 00:19:18.745 real 0m5.571s 00:19:18.745 user 0m15.560s 00:19:18.745 sys 0m0.515s 00:19:18.745 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:18.745 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:18.745 ************************************ 00:19:18.745 END TEST nvmf_vfio_user_nvme_compliance 00:19:18.745 ************************************ 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:19.006 ************************************ 00:19:19.006 START TEST nvmf_vfio_user_fuzz 00:19:19.006 ************************************ 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:19.006 * Looking for test storage... 00:19:19.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:19.006 22:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:19.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.006 --rc genhtml_branch_coverage=1 00:19:19.006 --rc genhtml_function_coverage=1 00:19:19.006 --rc genhtml_legend=1 00:19:19.006 --rc geninfo_all_blocks=1 00:19:19.006 --rc geninfo_unexecuted_blocks=1 00:19:19.006 00:19:19.006 ' 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:19.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.006 --rc genhtml_branch_coverage=1 00:19:19.006 --rc genhtml_function_coverage=1 00:19:19.006 --rc genhtml_legend=1 00:19:19.006 --rc geninfo_all_blocks=1 00:19:19.006 --rc geninfo_unexecuted_blocks=1 00:19:19.006 00:19:19.006 ' 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:19.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.006 --rc genhtml_branch_coverage=1 00:19:19.006 --rc genhtml_function_coverage=1 00:19:19.006 --rc genhtml_legend=1 00:19:19.006 --rc geninfo_all_blocks=1 00:19:19.006 --rc geninfo_unexecuted_blocks=1 00:19:19.006 00:19:19.006 ' 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:19.006 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:19.006 --rc genhtml_branch_coverage=1 00:19:19.006 --rc genhtml_function_coverage=1 00:19:19.006 --rc genhtml_legend=1 00:19:19.006 --rc geninfo_all_blocks=1 00:19:19.006 --rc geninfo_unexecuted_blocks=1 00:19:19.006 00:19:19.006 ' 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.006 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.268 22:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.268 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:19.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3495497 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3495497' 00:19:19.269 Process pid: 3495497 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3495497 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3495497 ']' 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.269 22:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.269 22:07:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:20.209 22:07:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.209 22:07:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:19:20.209 22:07:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:21.149 malloc0 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:21.149 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.150 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:21.150 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.150 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:21.150 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.150 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:21.150 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.150 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:21.150 22:07:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:53.397 Fuzzing completed. Shutting down the fuzz application 00:19:53.397 00:19:53.397 Dumping successful admin opcodes: 00:19:53.397 8, 9, 10, 24, 00:19:53.397 Dumping successful io opcodes: 00:19:53.397 0, 00:19:53.397 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1237196, total successful commands: 4854, random_seed: 2853199168 00:19:53.397 NS: 0x200003a1ef00 admin qp, Total commands completed: 269509, total successful commands: 2170, random_seed: 686487168 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3495497 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3495497 ']' 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3495497 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3495497 00:19:53.397 22:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3495497' 00:19:53.397 killing process with pid 3495497 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3495497 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3495497 00:19:53.397 22:08:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:53.397 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:53.397 00:19:53.397 real 0m32.805s 00:19:53.397 user 0m35.460s 00:19:53.397 sys 0m25.906s 00:19:53.397 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:53.397 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:53.397 ************************************ 00:19:53.397 END TEST nvmf_vfio_user_fuzz 00:19:53.397 ************************************ 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.398 ************************************ 00:19:53.398 START TEST nvmf_auth_target 00:19:53.398 ************************************ 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:53.398 * Looking for test storage... 00:19:53.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.398 22:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.398 22:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:53.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.398 --rc genhtml_branch_coverage=1 00:19:53.398 --rc genhtml_function_coverage=1 00:19:53.398 --rc genhtml_legend=1 00:19:53.398 --rc geninfo_all_blocks=1 00:19:53.398 --rc geninfo_unexecuted_blocks=1 00:19:53.398 00:19:53.398 ' 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:53.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.398 --rc genhtml_branch_coverage=1 00:19:53.398 --rc genhtml_function_coverage=1 00:19:53.398 --rc genhtml_legend=1 00:19:53.398 --rc geninfo_all_blocks=1 00:19:53.398 --rc geninfo_unexecuted_blocks=1 00:19:53.398 00:19:53.398 ' 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:53.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.398 --rc genhtml_branch_coverage=1 00:19:53.398 --rc genhtml_function_coverage=1 00:19:53.398 --rc genhtml_legend=1 00:19:53.398 --rc geninfo_all_blocks=1 00:19:53.398 --rc geninfo_unexecuted_blocks=1 00:19:53.398 00:19:53.398 ' 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:53.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.398 --rc genhtml_branch_coverage=1 00:19:53.398 --rc genhtml_function_coverage=1 00:19:53.398 --rc genhtml_legend=1 00:19:53.398 
--rc geninfo_all_blocks=1 00:19:53.398 --rc geninfo_unexecuted_blocks=1 00:19:53.398 00:19:53.398 ' 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.398 
22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:53.398 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:53.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:53.399 22:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:53.399 22:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:53.399 22:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:59.988 22:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:59.988 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:59.988 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:59.988 22:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:59.988 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 
00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.988 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:59.989 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.989 22:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.989 22:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:59.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:19:59.989 00:19:59.989 --- 10.0.0.2 ping statistics --- 00:19:59.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.989 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:19:59.989 00:19:59.989 --- 10.0.0.1 ping statistics --- 00:19:59.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.989 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.989 22:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3505471 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3505471 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3505471 ']' 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:59.989 22:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3505684 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@750 -- # digest=null 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=6d5f4195ccf6e78b5f91262fed19dfbae7394f5b4012c8d1 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.fUM 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 6d5f4195ccf6e78b5f91262fed19dfbae7394f5b4012c8d1 0 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 6d5f4195ccf6e78b5f91262fed19dfbae7394f5b4012c8d1 0 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=6d5f4195ccf6e78b5f91262fed19dfbae7394f5b4012c8d1 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.fUM 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.fUM 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.fUM 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
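The gen_dhchap_key steps above read random bytes via `xxd -p /dev/urandom`, then hand the hex string to an inline `python -` snippet to produce a DH-HMAC-CHAP secret. A hedged reconstruction of that formatting step follows, based on the NVMe-oF DH-HMAC-CHAP secret representation (base64 of the key with a little-endian CRC-32 appended); the exact field layout is an assumption, not copied from nvmf/common.sh:

```python
# Hedged sketch of the DHHC-1 secret formatting done by the inline
# "python -" step of format_key above. Assumed layout:
#   "DHHC-1:<digest>:<base64(key || crc32_le(key))>:"
# where digest 0-3 selects null/sha256/sha384/sha512, matching the
# log's digests map.
import base64
import zlib

def format_dhchap_key(key: str, digest: int) -> str:
    """Wrap an ASCII key string as a DH-HMAC-CHAP secret."""
    raw = key.encode("ascii")  # the hex string from xxd is used as-is
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return f"DHHC-1:{digest:02d}:{b64}:"
```

For the null-digest key above this yields a secret beginning `DHHC-1:00:`, which the script then writes to a `mktemp` file and restricts with `chmod 0600`.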
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a046f2dda036a56667148211db8685565da45ab84f08c1889b1d7d29042186c0 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.EpH 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key a046f2dda036a56667148211db8685565da45ab84f08c1889b1d7d29042186c0 3 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a046f2dda036a56667148211db8685565da45ab84f08c1889b1d7d29042186c0 3 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a046f2dda036a56667148211db8685565da45ab84f08c1889b1d7d29042186c0 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # digest=3 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.EpH 00:20:00.562 22:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.EpH 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.EpH 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=b2fb0ed5ded52fee8de876a642921626 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.nvB 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key b2fb0ed5ded52fee8de876a642921626 1 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 
b2fb0ed5ded52fee8de876a642921626 1 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=b2fb0ed5ded52fee8de876a642921626 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:20:00.562 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.nvB 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.nvB 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.nvB 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=c12a381a2c96cf03f682cca19ba50cf6220734732263e1fa 00:20:00.823 22:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.FJt 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key c12a381a2c96cf03f682cca19ba50cf6220734732263e1fa 2 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 c12a381a2c96cf03f682cca19ba50cf6220734732263e1fa 2 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:00.823 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=c12a381a2c96cf03f682cca19ba50cf6220734732263e1fa 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.FJt 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.FJt 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.FJt 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A 
digests 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=efb6a189e1cb69289c702043e47bd90b1a6daa9a61a34f84 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.9nm 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key efb6a189e1cb69289c702043e47bd90b1a6daa9a61a34f84 2 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 efb6a189e1cb69289c702043e47bd90b1a6daa9a61a34f84 2 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=efb6a189e1cb69289c702043e47bd90b1a6daa9a61a34f84 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.9nm 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.9nm 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.9nm 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=3812e6dc3c9df64818a936d6a915d7b6 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Ioh 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 3812e6dc3c9df64818a936d6a915d7b6 1 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 3812e6dc3c9df64818a936d6a915d7b6 1 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=3812e6dc3c9df64818a936d6a915d7b6 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 
00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Ioh 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Ioh 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Ioh 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=9cfb909816a33419852123855925c537b80c4efab7c4738c5bda5c94d6c4ad82 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.a3T 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 9cfb909816a33419852123855925c537b80c4efab7c4738c5bda5c94d6c4ad82 3 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # 
format_key DHHC-1 9cfb909816a33419852123855925c537b80c4efab7c4738c5bda5c94d6c4ad82 3 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=9cfb909816a33419852123855925c537b80c4efab7c4738c5bda5c94d6c4ad82 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:20:00.824 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.a3T 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.a3T 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.a3T 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3505471 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3505471 ']' 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
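The `waitforlisten` calls above block with `max_retries=100` until the target process is up on its RPC Unix socket (`/var/tmp/spdk.sock` for the target, `/var/tmp/host.sock` for the host app). A minimal sketch of that pattern, assuming a simple connect-poll loop (the real autotest_common.sh helper may probe differently); the function name and parameters are illustrative:

```python
# Hedged sketch of a waitforlisten-style startup gate: poll until the
# process accepts connections on its RPC Unix domain socket, with a
# bounded retry count (the log uses max_retries=100).
import socket
import time

def wait_for_listen(path: str, retries: int = 100, delay: float = 0.1) -> bool:
    """Return True once a listener accepts connections on `path`."""
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(delay)
        finally:
            s.close()
    return False
```

Once the socket answers, the script proceeds to issue `rpc.py -s /var/tmp/host.sock` commands such as the `keyring_file_add_key` calls that follow.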
00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3505684 /var/tmp/host.sock 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3505684 ']' 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:01.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.085 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.346 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fUM 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.fUM 00:20:01.347 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.fUM 00:20:01.607 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.EpH ]] 00:20:01.607 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EpH 00:20:01.607 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.607 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.607 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.607 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EpH 00:20:01.607 22:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EpH 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nvB 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.nvB 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.nvB 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.FJt ]] 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FJt 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FJt 00:20:01.868 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FJt 00:20:02.129 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:02.129 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9nm 00:20:02.129 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.129 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.129 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.129 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.9nm 00:20:02.129 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.9nm 00:20:02.390 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Ioh ]] 00:20:02.391 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ioh 00:20:02.391 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.391 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.391 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.391 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ioh 00:20:02.391 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ioh 00:20:02.652 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:02.652 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.a3T 00:20:02.652 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.652 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.652 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.652 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.a3T 00:20:02.652 22:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.a3T 00:20:02.652 22:08:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:02.652 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:02.652 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.652 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.652 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:02.652 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.913 22:08:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.913 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.174 00:20:03.174 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.174 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.174 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.434 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.434 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.434 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.434 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:03.434 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.434 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.434 { 00:20:03.435 "cntlid": 1, 00:20:03.435 "qid": 0, 00:20:03.435 "state": "enabled", 00:20:03.435 "thread": "nvmf_tgt_poll_group_000", 00:20:03.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:03.435 "listen_address": { 00:20:03.435 "trtype": "TCP", 00:20:03.435 "adrfam": "IPv4", 00:20:03.435 "traddr": "10.0.0.2", 00:20:03.435 "trsvcid": "4420" 00:20:03.435 }, 00:20:03.435 "peer_address": { 00:20:03.435 "trtype": "TCP", 00:20:03.435 "adrfam": "IPv4", 00:20:03.435 "traddr": "10.0.0.1", 00:20:03.435 "trsvcid": "41152" 00:20:03.435 }, 00:20:03.435 "auth": { 00:20:03.435 "state": "completed", 00:20:03.435 "digest": "sha256", 00:20:03.435 "dhgroup": "null" 00:20:03.435 } 00:20:03.435 } 00:20:03.435 ]' 00:20:03.435 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.435 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.435 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.435 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:03.435 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.696 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.696 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.696 22:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.696 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:03.696 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.637 22:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.637 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.637 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.637 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.637 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.898 00:20:04.898 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.898 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.898 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.158 { 00:20:05.158 "cntlid": 3, 00:20:05.158 "qid": 0, 00:20:05.158 "state": "enabled", 00:20:05.158 "thread": "nvmf_tgt_poll_group_000", 00:20:05.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:05.158 "listen_address": { 00:20:05.158 "trtype": "TCP", 00:20:05.158 "adrfam": "IPv4", 00:20:05.158 
"traddr": "10.0.0.2", 00:20:05.158 "trsvcid": "4420" 00:20:05.158 }, 00:20:05.158 "peer_address": { 00:20:05.158 "trtype": "TCP", 00:20:05.158 "adrfam": "IPv4", 00:20:05.158 "traddr": "10.0.0.1", 00:20:05.158 "trsvcid": "41184" 00:20:05.158 }, 00:20:05.158 "auth": { 00:20:05.158 "state": "completed", 00:20:05.158 "digest": "sha256", 00:20:05.158 "dhgroup": "null" 00:20:05.158 } 00:20:05.158 } 00:20:05.158 ]' 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.158 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.419 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:05.419 22:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:05.989 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.989 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.989 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.989 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.989 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.989 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.989 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:05.990 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.250 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.510 00:20:06.510 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.510 22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.510 
22:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.770 { 00:20:06.770 "cntlid": 5, 00:20:06.770 "qid": 0, 00:20:06.770 "state": "enabled", 00:20:06.770 "thread": "nvmf_tgt_poll_group_000", 00:20:06.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:06.770 "listen_address": { 00:20:06.770 "trtype": "TCP", 00:20:06.770 "adrfam": "IPv4", 00:20:06.770 "traddr": "10.0.0.2", 00:20:06.770 "trsvcid": "4420" 00:20:06.770 }, 00:20:06.770 "peer_address": { 00:20:06.770 "trtype": "TCP", 00:20:06.770 "adrfam": "IPv4", 00:20:06.770 "traddr": "10.0.0.1", 00:20:06.770 "trsvcid": "41210" 00:20:06.770 }, 00:20:06.770 "auth": { 00:20:06.770 "state": "completed", 00:20:06.770 "digest": "sha256", 00:20:06.770 "dhgroup": "null" 00:20:06.770 } 00:20:06.770 } 00:20:06.770 ]' 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.770 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.030 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:07.030 22:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:07.600 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.600 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.600 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.600 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.600 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.600 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.600 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:07.600 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.861 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.122 00:20:08.122 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.122 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.122 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.382 
22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.382 { 00:20:08.382 "cntlid": 7, 00:20:08.382 "qid": 0, 00:20:08.382 "state": "enabled", 00:20:08.382 "thread": "nvmf_tgt_poll_group_000", 00:20:08.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:08.382 "listen_address": { 00:20:08.382 "trtype": "TCP", 00:20:08.382 "adrfam": "IPv4", 00:20:08.382 "traddr": "10.0.0.2", 00:20:08.382 "trsvcid": "4420" 00:20:08.382 }, 00:20:08.382 "peer_address": { 00:20:08.382 "trtype": "TCP", 00:20:08.382 "adrfam": "IPv4", 00:20:08.382 "traddr": "10.0.0.1", 00:20:08.382 "trsvcid": "41230" 00:20:08.382 }, 00:20:08.382 "auth": { 00:20:08.382 "state": "completed", 00:20:08.382 "digest": "sha256", 00:20:08.382 "dhgroup": "null" 00:20:08.382 } 00:20:08.382 } 00:20:08.382 ]' 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.382 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.642 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:08.642 22:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:09.214 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.214 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.214 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.214 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.214 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.214 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.214 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.214 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:09.214 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.475 22:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.735 00:20:09.735 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.735 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.735 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.735 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.735 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.735 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.735 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.995 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.995 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.995 { 00:20:09.995 "cntlid": 9, 00:20:09.995 "qid": 0, 00:20:09.995 "state": "enabled", 00:20:09.995 "thread": "nvmf_tgt_poll_group_000", 00:20:09.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:09.995 "listen_address": { 00:20:09.995 "trtype": "TCP", 00:20:09.995 "adrfam": "IPv4", 00:20:09.995 "traddr": "10.0.0.2", 00:20:09.995 "trsvcid": "4420" 00:20:09.995 }, 00:20:09.995 "peer_address": { 00:20:09.995 "trtype": "TCP", 00:20:09.995 "adrfam": "IPv4", 00:20:09.995 "traddr": "10.0.0.1", 00:20:09.995 "trsvcid": "41260" 00:20:09.995 
}, 00:20:09.995 "auth": { 00:20:09.995 "state": "completed", 00:20:09.995 "digest": "sha256", 00:20:09.995 "dhgroup": "ffdhe2048" 00:20:09.995 } 00:20:09.995 } 00:20:09.995 ]' 00:20:09.995 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.995 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.995 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.995 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.995 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.995 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.995 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.995 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.255 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:10.255 22:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret 
DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:10.827 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.827 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.827 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.827 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.827 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.827 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.827 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.827 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.087 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.347 00:20:11.347 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.347 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.347 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.608 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.608 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.608 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.608 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.608 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.608 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.608 { 00:20:11.608 "cntlid": 11, 00:20:11.608 "qid": 0, 00:20:11.608 "state": "enabled", 00:20:11.608 "thread": "nvmf_tgt_poll_group_000", 00:20:11.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:11.608 "listen_address": { 00:20:11.608 "trtype": "TCP", 00:20:11.608 "adrfam": "IPv4", 00:20:11.608 "traddr": "10.0.0.2", 00:20:11.608 "trsvcid": "4420" 00:20:11.608 }, 00:20:11.608 "peer_address": { 00:20:11.608 "trtype": "TCP", 00:20:11.608 "adrfam": "IPv4", 00:20:11.608 "traddr": "10.0.0.1", 00:20:11.608 "trsvcid": "41298" 00:20:11.608 }, 00:20:11.608 "auth": { 00:20:11.608 "state": "completed", 00:20:11.608 "digest": "sha256", 00:20:11.608 "dhgroup": "ffdhe2048" 00:20:11.608 } 00:20:11.608 } 00:20:11.608 ]' 00:20:11.608 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.608 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.608 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.608 22:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.608 22:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.608 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.608 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.608 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.868 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:11.868 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:12.439 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.439 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.439 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:12.439 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.439 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.439 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.439 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.439 22:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.700 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.960 00:20:12.960 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.960 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.960 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.220 22:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.220 { 00:20:13.220 "cntlid": 13, 00:20:13.220 "qid": 0, 00:20:13.220 "state": "enabled", 00:20:13.220 "thread": "nvmf_tgt_poll_group_000", 00:20:13.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:13.220 "listen_address": { 00:20:13.220 "trtype": "TCP", 00:20:13.220 "adrfam": "IPv4", 00:20:13.220 "traddr": "10.0.0.2", 00:20:13.220 "trsvcid": "4420" 00:20:13.220 }, 00:20:13.220 "peer_address": { 00:20:13.220 "trtype": "TCP", 00:20:13.220 "adrfam": "IPv4", 00:20:13.220 "traddr": "10.0.0.1", 00:20:13.220 "trsvcid": "60890" 00:20:13.220 }, 00:20:13.220 "auth": { 00:20:13.220 "state": "completed", 00:20:13.220 "digest": "sha256", 00:20:13.220 "dhgroup": "ffdhe2048" 00:20:13.220 } 00:20:13.220 } 00:20:13.220 ]' 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.220 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.480 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:13.480 22:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:14.051 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.051 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.051 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.051 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.051 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.051 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.051 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.051 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.313 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.574 00:20:14.574 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.574 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.574 22:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.574 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.834 { 00:20:14.834 "cntlid": 15, 00:20:14.834 "qid": 0, 00:20:14.834 "state": "enabled", 00:20:14.834 "thread": "nvmf_tgt_poll_group_000", 00:20:14.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:14.834 "listen_address": { 00:20:14.834 "trtype": "TCP", 00:20:14.834 "adrfam": "IPv4", 00:20:14.834 "traddr": "10.0.0.2", 00:20:14.834 "trsvcid": "4420" 00:20:14.834 }, 00:20:14.834 "peer_address": { 00:20:14.834 "trtype": "TCP", 00:20:14.834 "adrfam": "IPv4", 00:20:14.834 "traddr": "10.0.0.1", 
00:20:14.834 "trsvcid": "60906" 00:20:14.834 }, 00:20:14.834 "auth": { 00:20:14.834 "state": "completed", 00:20:14.834 "digest": "sha256", 00:20:14.834 "dhgroup": "ffdhe2048" 00:20:14.834 } 00:20:14.834 } 00:20:14.834 ]' 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.834 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.095 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:15.095 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:15.666 22:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.666 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:15.666 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.666 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.666 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.666 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.666 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.666 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.666 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:15.928 22:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.928 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.188 00:20:16.188 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.188 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.188 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.188 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.188 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.188 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.188 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.188 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.188 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.188 { 00:20:16.188 "cntlid": 17, 00:20:16.188 "qid": 0, 00:20:16.188 "state": "enabled", 00:20:16.188 "thread": "nvmf_tgt_poll_group_000", 00:20:16.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:16.188 "listen_address": { 00:20:16.188 "trtype": "TCP", 00:20:16.188 "adrfam": "IPv4", 00:20:16.188 "traddr": "10.0.0.2", 00:20:16.188 "trsvcid": "4420" 00:20:16.188 }, 00:20:16.188 "peer_address": { 00:20:16.188 "trtype": "TCP", 00:20:16.188 "adrfam": "IPv4", 00:20:16.188 "traddr": "10.0.0.1", 00:20:16.188 "trsvcid": "60928" 00:20:16.188 }, 00:20:16.188 "auth": { 00:20:16.188 "state": "completed", 00:20:16.188 "digest": "sha256", 00:20:16.188 "dhgroup": "ffdhe3072" 00:20:16.188 } 00:20:16.188 } 00:20:16.188 ]' 00:20:16.188 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.447 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.447 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.447 22:08:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.447 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.447 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.447 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.447 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.707 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:16.707 22:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:17.277 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.277 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:17.277 22:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.277 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.277 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.277 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.277 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:17.277 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.538 22:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.538 22:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.799 00:20:17.799 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.799 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.799 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.060 { 00:20:18.060 "cntlid": 19, 00:20:18.060 "qid": 0, 00:20:18.060 "state": "enabled", 00:20:18.060 "thread": "nvmf_tgt_poll_group_000", 00:20:18.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:18.060 "listen_address": { 00:20:18.060 "trtype": "TCP", 00:20:18.060 "adrfam": "IPv4", 00:20:18.060 "traddr": "10.0.0.2", 00:20:18.060 "trsvcid": "4420" 00:20:18.060 }, 00:20:18.060 "peer_address": { 00:20:18.060 "trtype": "TCP", 00:20:18.060 "adrfam": "IPv4", 00:20:18.060 "traddr": "10.0.0.1", 00:20:18.060 "trsvcid": "60958" 00:20:18.060 }, 00:20:18.060 "auth": { 00:20:18.060 "state": "completed", 00:20:18.060 "digest": "sha256", 00:20:18.060 "dhgroup": "ffdhe3072" 00:20:18.060 } 00:20:18.060 } 00:20:18.060 ]' 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.060 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.321 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:18.321 22:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:18.891 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.891 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:18.891 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.891 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.891 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.891 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.891 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.891 22:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.151 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.412 00:20:19.412 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.412 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.412 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.412 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.412 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.412 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.412 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.412 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.412 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.412 { 00:20:19.412 "cntlid": 21, 00:20:19.412 "qid": 0, 00:20:19.412 "state": "enabled", 00:20:19.412 "thread": "nvmf_tgt_poll_group_000", 00:20:19.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:19.412 "listen_address": { 00:20:19.412 "trtype": "TCP", 00:20:19.412 "adrfam": "IPv4", 00:20:19.412 "traddr": "10.0.0.2", 00:20:19.412 
"trsvcid": "4420" 00:20:19.412 }, 00:20:19.412 "peer_address": { 00:20:19.412 "trtype": "TCP", 00:20:19.412 "adrfam": "IPv4", 00:20:19.412 "traddr": "10.0.0.1", 00:20:19.412 "trsvcid": "60990" 00:20:19.412 }, 00:20:19.412 "auth": { 00:20:19.412 "state": "completed", 00:20:19.412 "digest": "sha256", 00:20:19.412 "dhgroup": "ffdhe3072" 00:20:19.412 } 00:20:19.412 } 00:20:19.412 ]' 00:20:19.412 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.676 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.676 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.676 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.676 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.676 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.676 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.676 22:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.937 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:19.937 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:20.506 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.506 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:20.506 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.506 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.506 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.506 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.506 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:20.507 22:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.767 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.028 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.028 { 00:20:21.028 "cntlid": 23, 00:20:21.028 "qid": 0, 00:20:21.028 "state": "enabled", 00:20:21.028 "thread": "nvmf_tgt_poll_group_000", 00:20:21.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:21.028 "listen_address": { 00:20:21.028 "trtype": "TCP", 00:20:21.028 "adrfam": "IPv4", 00:20:21.028 "traddr": "10.0.0.2", 00:20:21.028 "trsvcid": "4420" 00:20:21.028 }, 00:20:21.028 "peer_address": { 00:20:21.028 "trtype": "TCP", 00:20:21.028 "adrfam": "IPv4", 00:20:21.028 "traddr": "10.0.0.1", 00:20:21.028 "trsvcid": "32788" 00:20:21.028 }, 00:20:21.028 "auth": { 00:20:21.028 "state": "completed", 00:20:21.028 "digest": "sha256", 00:20:21.028 "dhgroup": "ffdhe3072" 00:20:21.028 } 00:20:21.028 } 00:20:21.028 ]' 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.028 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.289 22:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.289 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.289 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.289 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.289 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.289 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:21.289 22:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.230 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.491 00:20:22.491 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.491 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.491 22:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.752 22:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.752 { 00:20:22.752 "cntlid": 25, 00:20:22.752 "qid": 0, 00:20:22.752 "state": "enabled", 00:20:22.752 "thread": "nvmf_tgt_poll_group_000", 00:20:22.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:22.752 "listen_address": { 00:20:22.752 "trtype": "TCP", 00:20:22.752 "adrfam": "IPv4", 00:20:22.752 "traddr": "10.0.0.2", 00:20:22.752 "trsvcid": "4420" 00:20:22.752 }, 00:20:22.752 "peer_address": { 00:20:22.752 "trtype": "TCP", 00:20:22.752 "adrfam": "IPv4", 00:20:22.752 "traddr": "10.0.0.1", 00:20:22.752 "trsvcid": "39000" 00:20:22.752 }, 00:20:22.752 "auth": { 00:20:22.752 "state": "completed", 00:20:22.752 "digest": "sha256", 00:20:22.752 "dhgroup": "ffdhe4096" 00:20:22.752 } 00:20:22.752 } 00:20:22.752 ]' 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.752 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.013 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:23.013 22:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:23.584 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.584 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:23.584 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.584 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:23.844 22:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.844 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.105 00:20:24.105 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.105 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.105 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.365 { 00:20:24.365 "cntlid": 27, 00:20:24.365 "qid": 0, 00:20:24.365 "state": "enabled", 00:20:24.365 "thread": "nvmf_tgt_poll_group_000", 00:20:24.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:24.365 "listen_address": { 00:20:24.365 "trtype": "TCP", 00:20:24.365 "adrfam": "IPv4", 00:20:24.365 "traddr": "10.0.0.2", 00:20:24.365 
"trsvcid": "4420" 00:20:24.365 }, 00:20:24.365 "peer_address": { 00:20:24.365 "trtype": "TCP", 00:20:24.365 "adrfam": "IPv4", 00:20:24.365 "traddr": "10.0.0.1", 00:20:24.365 "trsvcid": "39022" 00:20:24.365 }, 00:20:24.365 "auth": { 00:20:24.365 "state": "completed", 00:20:24.365 "digest": "sha256", 00:20:24.365 "dhgroup": "ffdhe4096" 00:20:24.365 } 00:20:24.365 } 00:20:24.365 ]' 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.365 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.626 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.626 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.626 22:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.626 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:24.626 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:25.198 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.458 22:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.719 00:20:25.719 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.719 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:25.719 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.979 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.979 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.979 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.979 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.979 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.979 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.979 { 00:20:25.979 "cntlid": 29, 00:20:25.979 "qid": 0, 00:20:25.979 "state": "enabled", 00:20:25.979 "thread": "nvmf_tgt_poll_group_000", 00:20:25.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:25.979 "listen_address": { 00:20:25.979 "trtype": "TCP", 00:20:25.979 "adrfam": "IPv4", 00:20:25.979 "traddr": "10.0.0.2", 00:20:25.979 "trsvcid": "4420" 00:20:25.979 }, 00:20:25.979 "peer_address": { 00:20:25.979 "trtype": "TCP", 00:20:25.979 "adrfam": "IPv4", 00:20:25.979 "traddr": "10.0.0.1", 00:20:25.979 "trsvcid": "39048" 00:20:25.979 }, 00:20:25.979 "auth": { 00:20:25.979 "state": "completed", 00:20:25.979 "digest": "sha256", 00:20:25.979 "dhgroup": "ffdhe4096" 00:20:25.979 } 00:20:25.979 } 00:20:25.979 ]' 00:20:25.979 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.979 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.979 22:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.979 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.979 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.240 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.240 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.240 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.240 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:26.241 22:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.182 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:27.443 00:20:27.443 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.443 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.443 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.704 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.704 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.704 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.704 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.704 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.704 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.704 { 00:20:27.704 "cntlid": 31, 00:20:27.704 "qid": 0, 00:20:27.704 "state": "enabled", 00:20:27.704 "thread": "nvmf_tgt_poll_group_000", 00:20:27.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:27.704 "listen_address": { 00:20:27.704 "trtype": "TCP", 00:20:27.704 "adrfam": "IPv4", 00:20:27.704 "traddr": "10.0.0.2", 00:20:27.704 "trsvcid": "4420" 00:20:27.704 }, 00:20:27.704 "peer_address": { 00:20:27.704 "trtype": "TCP", 00:20:27.704 "adrfam": "IPv4", 00:20:27.704 "traddr": "10.0.0.1", 00:20:27.704 "trsvcid": "39074" 00:20:27.704 }, 00:20:27.704 "auth": { 00:20:27.704 "state": "completed", 00:20:27.704 "digest": "sha256", 00:20:27.704 "dhgroup": "ffdhe4096" 00:20:27.704 } 00:20:27.704 } 00:20:27.704 ]' 00:20:27.704 22:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.704 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.704 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.704 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:27.704 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.704 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.704 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.704 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.964 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:27.965 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:28.537 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.537 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.537 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.537 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.537 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.537 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.537 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.537 22:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.537 22:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.798 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.059 00:20:29.059 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.059 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.059 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.319 { 00:20:29.319 "cntlid": 33, 00:20:29.319 "qid": 0, 00:20:29.319 "state": "enabled", 00:20:29.319 "thread": "nvmf_tgt_poll_group_000", 00:20:29.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:29.319 "listen_address": { 00:20:29.319 "trtype": "TCP", 00:20:29.319 "adrfam": "IPv4", 00:20:29.319 "traddr": "10.0.0.2", 00:20:29.319 
"trsvcid": "4420" 00:20:29.319 }, 00:20:29.319 "peer_address": { 00:20:29.319 "trtype": "TCP", 00:20:29.319 "adrfam": "IPv4", 00:20:29.319 "traddr": "10.0.0.1", 00:20:29.319 "trsvcid": "39098" 00:20:29.319 }, 00:20:29.319 "auth": { 00:20:29.319 "state": "completed", 00:20:29.319 "digest": "sha256", 00:20:29.319 "dhgroup": "ffdhe6144" 00:20:29.319 } 00:20:29.319 } 00:20:29.319 ]' 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.319 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.584 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:29.585 22:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:30.161 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.161 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.161 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.161 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.421 22:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.421 22:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.992 00:20:30.992 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.992 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.992 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.992 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.992 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.992 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.992 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.992 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.992 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.992 { 00:20:30.992 "cntlid": 35, 00:20:30.992 "qid": 0, 00:20:30.992 "state": "enabled", 00:20:30.992 "thread": "nvmf_tgt_poll_group_000", 00:20:30.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:30.992 "listen_address": { 00:20:30.993 "trtype": "TCP", 00:20:30.993 "adrfam": "IPv4", 00:20:30.993 "traddr": "10.0.0.2", 00:20:30.993 "trsvcid": "4420" 00:20:30.993 }, 00:20:30.993 "peer_address": { 00:20:30.993 "trtype": "TCP", 00:20:30.993 "adrfam": "IPv4", 00:20:30.993 "traddr": "10.0.0.1", 00:20:30.993 "trsvcid": "39116" 00:20:30.993 }, 00:20:30.993 "auth": { 00:20:30.993 "state": "completed", 00:20:30.993 "digest": "sha256", 00:20:30.993 "dhgroup": "ffdhe6144" 00:20:30.993 } 00:20:30.993 } 00:20:30.993 ]' 00:20:30.993 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.993 22:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.993 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.253 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.253 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.254 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.254 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.254 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.254 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:31.254 22:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.195 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.455 00:20:32.716 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.716 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.716 22:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.716 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.716 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.716 22:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.716 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.716 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.716 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.716 { 00:20:32.716 "cntlid": 37, 00:20:32.716 "qid": 0, 00:20:32.716 "state": "enabled", 00:20:32.716 "thread": "nvmf_tgt_poll_group_000", 00:20:32.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:32.716 "listen_address": { 00:20:32.716 "trtype": "TCP", 00:20:32.716 "adrfam": "IPv4", 00:20:32.716 "traddr": "10.0.0.2", 00:20:32.716 "trsvcid": "4420" 00:20:32.716 }, 00:20:32.716 "peer_address": { 00:20:32.716 "trtype": "TCP", 00:20:32.716 "adrfam": "IPv4", 00:20:32.716 "traddr": "10.0.0.1", 00:20:32.716 "trsvcid": "46054" 00:20:32.716 }, 00:20:32.716 "auth": { 00:20:32.716 "state": "completed", 00:20:32.716 "digest": "sha256", 00:20:32.716 "dhgroup": "ffdhe6144" 00:20:32.716 } 00:20:32.716 } 00:20:32.716 ]' 00:20:32.716 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.716 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.716 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.980 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.981 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.981 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.981 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.981 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.981 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:32.981 22:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.924 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.184 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.444 { 00:20:34.444 "cntlid": 39, 00:20:34.444 "qid": 0, 00:20:34.444 "state": "enabled", 00:20:34.444 "thread": "nvmf_tgt_poll_group_000", 00:20:34.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:34.444 "listen_address": { 00:20:34.444 "trtype": "TCP", 00:20:34.444 "adrfam": 
"IPv4", 00:20:34.444 "traddr": "10.0.0.2", 00:20:34.444 "trsvcid": "4420" 00:20:34.444 }, 00:20:34.444 "peer_address": { 00:20:34.444 "trtype": "TCP", 00:20:34.444 "adrfam": "IPv4", 00:20:34.444 "traddr": "10.0.0.1", 00:20:34.444 "trsvcid": "46066" 00:20:34.444 }, 00:20:34.444 "auth": { 00:20:34.444 "state": "completed", 00:20:34.444 "digest": "sha256", 00:20:34.444 "dhgroup": "ffdhe6144" 00:20:34.444 } 00:20:34.444 } 00:20:34.444 ]' 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.444 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.704 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.705 22:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.705 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.705 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.705 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.965 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:34.965 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:35.537 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.537 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.537 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.537 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.537 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.537 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.537 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.537 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.537 22:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.798 
22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.798 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.059 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.319 22:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.319 { 00:20:36.319 "cntlid": 41, 00:20:36.319 "qid": 0, 00:20:36.319 "state": "enabled", 00:20:36.319 "thread": "nvmf_tgt_poll_group_000", 00:20:36.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:36.319 "listen_address": { 00:20:36.319 "trtype": "TCP", 00:20:36.319 "adrfam": "IPv4", 00:20:36.319 "traddr": "10.0.0.2", 00:20:36.319 "trsvcid": "4420" 00:20:36.319 }, 00:20:36.319 "peer_address": { 00:20:36.319 "trtype": "TCP", 00:20:36.319 "adrfam": "IPv4", 00:20:36.319 "traddr": "10.0.0.1", 00:20:36.319 "trsvcid": "46082" 00:20:36.319 }, 00:20:36.319 "auth": { 00:20:36.319 "state": "completed", 00:20:36.319 "digest": "sha256", 00:20:36.319 "dhgroup": "ffdhe8192" 00:20:36.319 } 00:20:36.319 } 00:20:36.319 ]' 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:36.319 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.581 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.581 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.581 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.581 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.581 22:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.581 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:36.581 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.521 22:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.092 00:20:38.092 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.092 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.092 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.352 22:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.352 { 00:20:38.352 "cntlid": 43, 00:20:38.352 "qid": 0, 00:20:38.352 "state": "enabled", 00:20:38.352 "thread": "nvmf_tgt_poll_group_000", 00:20:38.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:38.352 "listen_address": { 00:20:38.352 "trtype": "TCP", 00:20:38.352 "adrfam": "IPv4", 00:20:38.352 "traddr": "10.0.0.2", 00:20:38.352 "trsvcid": "4420" 00:20:38.352 }, 00:20:38.352 "peer_address": { 00:20:38.352 "trtype": "TCP", 00:20:38.352 "adrfam": "IPv4", 00:20:38.352 "traddr": "10.0.0.1", 00:20:38.352 "trsvcid": "46116" 00:20:38.352 }, 00:20:38.352 "auth": { 00:20:38.352 "state": "completed", 00:20:38.352 "digest": "sha256", 00:20:38.352 "dhgroup": "ffdhe8192" 00:20:38.352 } 00:20:38.352 } 00:20:38.352 ]' 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.352 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.612 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:38.612 22:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:39.181 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.181 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:39.181 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.181 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.181 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.181 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.181 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.181 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.441 22:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.011 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.011 { 00:20:40.011 "cntlid": 45, 00:20:40.011 "qid": 0, 00:20:40.011 "state": "enabled", 00:20:40.011 "thread": "nvmf_tgt_poll_group_000", 00:20:40.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:40.011 
"listen_address": { 00:20:40.011 "trtype": "TCP", 00:20:40.011 "adrfam": "IPv4", 00:20:40.011 "traddr": "10.0.0.2", 00:20:40.011 "trsvcid": "4420" 00:20:40.011 }, 00:20:40.011 "peer_address": { 00:20:40.011 "trtype": "TCP", 00:20:40.011 "adrfam": "IPv4", 00:20:40.011 "traddr": "10.0.0.1", 00:20:40.011 "trsvcid": "46130" 00:20:40.011 }, 00:20:40.011 "auth": { 00:20:40.011 "state": "completed", 00:20:40.011 "digest": "sha256", 00:20:40.011 "dhgroup": "ffdhe8192" 00:20:40.011 } 00:20:40.011 } 00:20:40.011 ]' 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.011 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.272 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:40.272 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.272 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.272 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.272 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.533 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:40.533 22:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:41.105 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.105 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.105 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.105 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.105 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.105 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.105 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:41.105 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:41.365 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:41.365 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.365 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:41.365 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:41.366 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.366 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.366 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:41.366 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.366 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.366 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.366 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.366 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.366 22:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.626 00:20:41.626 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.627 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:41.627 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.888 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.888 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.888 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.888 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.888 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.888 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.888 { 00:20:41.888 "cntlid": 47, 00:20:41.888 "qid": 0, 00:20:41.888 "state": "enabled", 00:20:41.888 "thread": "nvmf_tgt_poll_group_000", 00:20:41.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:41.888 "listen_address": { 00:20:41.888 "trtype": "TCP", 00:20:41.888 "adrfam": "IPv4", 00:20:41.888 "traddr": "10.0.0.2", 00:20:41.888 "trsvcid": "4420" 00:20:41.888 }, 00:20:41.888 "peer_address": { 00:20:41.888 "trtype": "TCP", 00:20:41.888 "adrfam": "IPv4", 00:20:41.888 "traddr": "10.0.0.1", 00:20:41.888 "trsvcid": "46148" 00:20:41.888 }, 00:20:41.888 "auth": { 00:20:41.888 "state": "completed", 00:20:41.888 "digest": "sha256", 00:20:41.888 "dhgroup": "ffdhe8192" 00:20:41.888 } 00:20:41.888 } 00:20:41.888 ]' 00:20:41.888 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.888 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.888 22:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.150 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:42.150 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.150 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.150 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.150 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.150 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:42.150 22:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.093 
22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.093 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.094 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.094 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.355 00:20:43.355 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.355 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.355 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.616 { 00:20:43.616 "cntlid": 49, 00:20:43.616 "qid": 0, 00:20:43.616 "state": "enabled", 00:20:43.616 "thread": "nvmf_tgt_poll_group_000", 00:20:43.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:43.616 "listen_address": { 00:20:43.616 "trtype": "TCP", 00:20:43.616 "adrfam": "IPv4", 00:20:43.616 "traddr": "10.0.0.2", 00:20:43.616 "trsvcid": "4420" 00:20:43.616 }, 00:20:43.616 "peer_address": { 00:20:43.616 "trtype": "TCP", 00:20:43.616 "adrfam": "IPv4", 00:20:43.616 "traddr": "10.0.0.1", 00:20:43.616 "trsvcid": "56748" 00:20:43.616 }, 00:20:43.616 "auth": { 00:20:43.616 "state": "completed", 00:20:43.616 "digest": "sha384", 00:20:43.616 "dhgroup": "null" 00:20:43.616 } 00:20:43.616 } 00:20:43.616 ]' 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.616 22:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.616 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.616 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:43.616 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.877 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:43.877 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:44.449 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.449 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.449 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.449 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.449 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.449 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.449 22:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:44.449 22:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.709 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.710 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.971 00:20:44.971 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.971 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.971 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.971 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.971 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.971 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.971 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.231 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.231 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.231 { 00:20:45.231 "cntlid": 51, 00:20:45.231 "qid": 0, 00:20:45.231 "state": "enabled", 00:20:45.231 "thread": "nvmf_tgt_poll_group_000", 00:20:45.231 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:45.231 "listen_address": { 00:20:45.231 "trtype": "TCP", 00:20:45.231 "adrfam": "IPv4", 00:20:45.231 "traddr": "10.0.0.2", 00:20:45.231 "trsvcid": "4420" 00:20:45.231 }, 00:20:45.231 "peer_address": { 00:20:45.231 "trtype": "TCP", 00:20:45.231 "adrfam": "IPv4", 00:20:45.231 "traddr": "10.0.0.1", 00:20:45.231 "trsvcid": "56768" 00:20:45.231 }, 00:20:45.231 "auth": { 00:20:45.231 "state": "completed", 00:20:45.231 "digest": "sha384", 00:20:45.231 "dhgroup": "null" 00:20:45.231 } 00:20:45.231 } 00:20:45.231 ]' 00:20:45.231 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.231 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.231 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.231 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:45.231 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.231 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.231 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.231 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.493 22:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:45.493 22:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:46.063 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.063 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:46.063 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.063 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.063 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.064 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.064 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:46.064 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.324 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.585 00:20:46.585 22:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.585 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.585 22:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.585 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.585 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.585 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.585 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.585 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.585 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.585 { 00:20:46.585 "cntlid": 53, 00:20:46.585 "qid": 0, 00:20:46.585 "state": "enabled", 00:20:46.585 "thread": "nvmf_tgt_poll_group_000", 00:20:46.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:46.585 "listen_address": { 00:20:46.585 "trtype": "TCP", 00:20:46.585 "adrfam": "IPv4", 00:20:46.585 "traddr": "10.0.0.2", 00:20:46.585 "trsvcid": "4420" 00:20:46.585 }, 00:20:46.585 "peer_address": { 00:20:46.585 "trtype": "TCP", 00:20:46.585 "adrfam": "IPv4", 00:20:46.585 "traddr": "10.0.0.1", 00:20:46.585 "trsvcid": "56800" 00:20:46.585 }, 00:20:46.585 "auth": { 00:20:46.585 "state": "completed", 00:20:46.585 "digest": "sha384", 00:20:46.585 "dhgroup": "null" 00:20:46.585 } 00:20:46.585 } 00:20:46.585 ]' 00:20:46.585 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:46.846 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.846 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.846 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.846 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.846 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.846 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.846 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.106 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:47.106 22:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:47.678 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.678 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.678 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.678 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.678 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.678 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.678 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.678 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:47.938 
22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.938 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.198 00:20:48.199 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.199 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.199 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.199 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.199 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.199 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.199 22:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.199 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.199 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.199 { 00:20:48.199 "cntlid": 55, 00:20:48.199 "qid": 0, 00:20:48.199 "state": "enabled", 00:20:48.199 "thread": "nvmf_tgt_poll_group_000", 00:20:48.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:48.199 "listen_address": { 00:20:48.199 "trtype": "TCP", 00:20:48.199 "adrfam": "IPv4", 00:20:48.199 "traddr": "10.0.0.2", 00:20:48.199 "trsvcid": "4420" 00:20:48.199 }, 00:20:48.199 "peer_address": { 00:20:48.199 "trtype": "TCP", 00:20:48.199 "adrfam": "IPv4", 00:20:48.199 "traddr": "10.0.0.1", 00:20:48.199 "trsvcid": "56838" 00:20:48.199 }, 00:20:48.199 "auth": { 00:20:48.199 "state": "completed", 00:20:48.199 "digest": "sha384", 00:20:48.199 "dhgroup": "null" 00:20:48.199 } 00:20:48.199 } 00:20:48.199 ]' 00:20:48.199 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.459 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.459 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.459 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:48.459 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.459 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.459 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.459 22:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.720 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:48.720 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:49.290 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.291 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:49.291 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.291 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.291 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.291 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.291 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.291 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.291 22:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.551 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.552 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.552 22:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.552 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.812 { 00:20:49.812 "cntlid": 57, 00:20:49.812 "qid": 0, 00:20:49.812 "state": "enabled", 00:20:49.812 "thread": "nvmf_tgt_poll_group_000", 00:20:49.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:49.812 "listen_address": { 00:20:49.812 "trtype": "TCP", 00:20:49.812 "adrfam": "IPv4", 00:20:49.812 "traddr": "10.0.0.2", 00:20:49.812 
"trsvcid": "4420" 00:20:49.812 }, 00:20:49.812 "peer_address": { 00:20:49.812 "trtype": "TCP", 00:20:49.812 "adrfam": "IPv4", 00:20:49.812 "traddr": "10.0.0.1", 00:20:49.812 "trsvcid": "56872" 00:20:49.812 }, 00:20:49.812 "auth": { 00:20:49.812 "state": "completed", 00:20:49.812 "digest": "sha384", 00:20:49.812 "dhgroup": "ffdhe2048" 00:20:49.812 } 00:20:49.812 } 00:20:49.812 ]' 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.812 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.071 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.071 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.071 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.071 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.071 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.071 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.332 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:50.332 22:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:50.903 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.903 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.903 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.903 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.903 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.903 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.903 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.903 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.164 22:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.164 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.164 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.424 { 00:20:51.424 "cntlid": 59, 00:20:51.424 "qid": 0, 00:20:51.424 "state": "enabled", 00:20:51.424 "thread": "nvmf_tgt_poll_group_000", 00:20:51.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:51.424 "listen_address": { 00:20:51.424 "trtype": "TCP", 00:20:51.424 "adrfam": "IPv4", 00:20:51.424 "traddr": "10.0.0.2", 00:20:51.424 "trsvcid": "4420" 00:20:51.424 }, 00:20:51.424 "peer_address": { 00:20:51.424 "trtype": "TCP", 00:20:51.424 "adrfam": "IPv4", 00:20:51.424 "traddr": "10.0.0.1", 00:20:51.424 "trsvcid": "56894" 00:20:51.424 }, 00:20:51.424 "auth": { 00:20:51.424 "state": "completed", 00:20:51.424 "digest": "sha384", 00:20:51.424 "dhgroup": "ffdhe2048" 00:20:51.424 } 00:20:51.424 } 00:20:51.424 ]' 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.424 22:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.424 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.684 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:51.684 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.684 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.684 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.684 22:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.684 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:51.685 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.627 22:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.888 00:20:52.888 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.888 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.888 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.149 22:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.149 { 00:20:53.149 "cntlid": 61, 00:20:53.149 "qid": 0, 00:20:53.149 "state": "enabled", 00:20:53.149 "thread": "nvmf_tgt_poll_group_000", 00:20:53.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:53.149 "listen_address": { 00:20:53.149 "trtype": "TCP", 00:20:53.149 "adrfam": "IPv4", 00:20:53.149 "traddr": "10.0.0.2", 00:20:53.149 "trsvcid": "4420" 00:20:53.149 }, 00:20:53.149 "peer_address": { 00:20:53.149 "trtype": "TCP", 00:20:53.149 "adrfam": "IPv4", 00:20:53.149 "traddr": "10.0.0.1", 00:20:53.149 "trsvcid": "34206" 00:20:53.149 }, 00:20:53.149 "auth": { 00:20:53.149 "state": "completed", 00:20:53.149 "digest": "sha384", 00:20:53.149 "dhgroup": "ffdhe2048" 00:20:53.149 } 00:20:53.149 } 00:20:53.149 ]' 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.149 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.413 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:53.413 22:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:53.984 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.984 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.984 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.984 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.984 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.984 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.984 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.984 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.289 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.608 00:20:54.608 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.608 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.608 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.608 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.608 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.608 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.608 22:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.608 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.608 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.608 { 00:20:54.608 "cntlid": 63, 00:20:54.608 "qid": 0, 00:20:54.608 "state": "enabled", 00:20:54.608 "thread": "nvmf_tgt_poll_group_000", 00:20:54.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:54.608 "listen_address": { 00:20:54.608 "trtype": "TCP", 00:20:54.608 "adrfam": 
"IPv4", 00:20:54.608 "traddr": "10.0.0.2", 00:20:54.608 "trsvcid": "4420" 00:20:54.608 }, 00:20:54.608 "peer_address": { 00:20:54.608 "trtype": "TCP", 00:20:54.608 "adrfam": "IPv4", 00:20:54.608 "traddr": "10.0.0.1", 00:20:54.608 "trsvcid": "34232" 00:20:54.608 }, 00:20:54.608 "auth": { 00:20:54.608 "state": "completed", 00:20:54.608 "digest": "sha384", 00:20:54.608 "dhgroup": "ffdhe2048" 00:20:54.608 } 00:20:54.608 } 00:20:54.608 ]' 00:20:54.608 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.608 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.608 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.927 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.927 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.927 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.927 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.927 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.927 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:54.927 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:20:55.524 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.524 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.524 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.524 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.524 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.525 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.525 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.525 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.525 22:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.785 
22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.785 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.045 00:20:56.045 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.045 22:09:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.045 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.306 { 00:20:56.306 "cntlid": 65, 00:20:56.306 "qid": 0, 00:20:56.306 "state": "enabled", 00:20:56.306 "thread": "nvmf_tgt_poll_group_000", 00:20:56.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:56.306 "listen_address": { 00:20:56.306 "trtype": "TCP", 00:20:56.306 "adrfam": "IPv4", 00:20:56.306 "traddr": "10.0.0.2", 00:20:56.306 "trsvcid": "4420" 00:20:56.306 }, 00:20:56.306 "peer_address": { 00:20:56.306 "trtype": "TCP", 00:20:56.306 "adrfam": "IPv4", 00:20:56.306 "traddr": "10.0.0.1", 00:20:56.306 "trsvcid": "34256" 00:20:56.306 }, 00:20:56.306 "auth": { 00:20:56.306 "state": "completed", 00:20:56.306 "digest": "sha384", 00:20:56.306 "dhgroup": "ffdhe3072" 00:20:56.306 } 00:20:56.306 } 00:20:56.306 ]' 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.306 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.566 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:56.566 22:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:20:57.137 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.137 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:57.137 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.137 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.137 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.137 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.137 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.137 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.398 22:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.659 00:20:57.659 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.659 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.659 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.920 22:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.920 { 00:20:57.920 "cntlid": 67, 00:20:57.920 "qid": 0, 00:20:57.920 "state": "enabled", 00:20:57.920 "thread": "nvmf_tgt_poll_group_000", 00:20:57.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:57.920 "listen_address": { 00:20:57.920 "trtype": "TCP", 00:20:57.920 "adrfam": "IPv4", 00:20:57.920 "traddr": "10.0.0.2", 00:20:57.920 "trsvcid": "4420" 00:20:57.920 }, 00:20:57.920 "peer_address": { 00:20:57.920 "trtype": "TCP", 00:20:57.920 "adrfam": "IPv4", 00:20:57.920 "traddr": "10.0.0.1", 00:20:57.920 "trsvcid": "34284" 00:20:57.920 }, 00:20:57.920 "auth": { 00:20:57.920 "state": "completed", 00:20:57.920 "digest": "sha384", 00:20:57.920 "dhgroup": "ffdhe3072" 00:20:57.920 } 00:20:57.920 } 00:20:57.920 ]' 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.920 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.180 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:58.180 22:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:20:58.752 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.752 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:58.752 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.752 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.752 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.752 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.752 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.752 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.013 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.273 00:20:59.273 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.273 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.273 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.534 { 00:20:59.534 "cntlid": 69, 00:20:59.534 "qid": 0, 00:20:59.534 "state": "enabled", 00:20:59.534 "thread": "nvmf_tgt_poll_group_000", 00:20:59.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:59.534 
"listen_address": { 00:20:59.534 "trtype": "TCP", 00:20:59.534 "adrfam": "IPv4", 00:20:59.534 "traddr": "10.0.0.2", 00:20:59.534 "trsvcid": "4420" 00:20:59.534 }, 00:20:59.534 "peer_address": { 00:20:59.534 "trtype": "TCP", 00:20:59.534 "adrfam": "IPv4", 00:20:59.534 "traddr": "10.0.0.1", 00:20:59.534 "trsvcid": "34292" 00:20:59.534 }, 00:20:59.534 "auth": { 00:20:59.534 "state": "completed", 00:20:59.534 "digest": "sha384", 00:20:59.534 "dhgroup": "ffdhe3072" 00:20:59.534 } 00:20:59.534 } 00:20:59.534 ]' 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.534 22:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.794 22:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:20:59.794 22:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:00.364 22:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.364 22:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:00.364 22:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.364 22:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.364 22:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.364 22:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.364 22:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.364 22:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.626 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.887 00:21:00.887 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.887 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:21:00.887 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.148 { 00:21:01.148 "cntlid": 71, 00:21:01.148 "qid": 0, 00:21:01.148 "state": "enabled", 00:21:01.148 "thread": "nvmf_tgt_poll_group_000", 00:21:01.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:01.148 "listen_address": { 00:21:01.148 "trtype": "TCP", 00:21:01.148 "adrfam": "IPv4", 00:21:01.148 "traddr": "10.0.0.2", 00:21:01.148 "trsvcid": "4420" 00:21:01.148 }, 00:21:01.148 "peer_address": { 00:21:01.148 "trtype": "TCP", 00:21:01.148 "adrfam": "IPv4", 00:21:01.148 "traddr": "10.0.0.1", 00:21:01.148 "trsvcid": "34318" 00:21:01.148 }, 00:21:01.148 "auth": { 00:21:01.148 "state": "completed", 00:21:01.148 "digest": "sha384", 00:21:01.148 "dhgroup": "ffdhe3072" 00:21:01.148 } 00:21:01.148 } 00:21:01.148 ]' 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.148 22:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.148 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.409 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:01.409 22:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:01.979 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.979 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.979 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:01.980 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.980 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.980 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.980 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.980 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.980 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.240 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.241 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.501 00:21:02.501 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.501 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.501 22:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.762 22:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.762 { 00:21:02.762 "cntlid": 73, 00:21:02.762 "qid": 0, 00:21:02.762 "state": "enabled", 00:21:02.762 "thread": "nvmf_tgt_poll_group_000", 00:21:02.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:02.762 "listen_address": { 00:21:02.762 "trtype": "TCP", 00:21:02.762 "adrfam": "IPv4", 00:21:02.762 "traddr": "10.0.0.2", 00:21:02.762 "trsvcid": "4420" 00:21:02.762 }, 00:21:02.762 "peer_address": { 00:21:02.762 "trtype": "TCP", 00:21:02.762 "adrfam": "IPv4", 00:21:02.762 "traddr": "10.0.0.1", 00:21:02.762 "trsvcid": "41150" 00:21:02.762 }, 00:21:02.762 "auth": { 00:21:02.762 "state": "completed", 00:21:02.762 "digest": "sha384", 00:21:02.762 "dhgroup": "ffdhe4096" 00:21:02.762 } 00:21:02.762 } 00:21:02.762 ]' 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.762 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.762 22:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.023 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:03.023 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:03.593 22:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.593 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.593 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.593 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.593 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.593 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.593 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.593 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.853 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.113 00:21:04.113 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.113 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.113 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.374 { 00:21:04.374 "cntlid": 75, 00:21:04.374 "qid": 0, 00:21:04.374 "state": "enabled", 00:21:04.374 "thread": "nvmf_tgt_poll_group_000", 00:21:04.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:04.374 
"listen_address": { 00:21:04.374 "trtype": "TCP", 00:21:04.374 "adrfam": "IPv4", 00:21:04.374 "traddr": "10.0.0.2", 00:21:04.374 "trsvcid": "4420" 00:21:04.374 }, 00:21:04.374 "peer_address": { 00:21:04.374 "trtype": "TCP", 00:21:04.374 "adrfam": "IPv4", 00:21:04.374 "traddr": "10.0.0.1", 00:21:04.374 "trsvcid": "41182" 00:21:04.374 }, 00:21:04.374 "auth": { 00:21:04.374 "state": "completed", 00:21:04.374 "digest": "sha384", 00:21:04.374 "dhgroup": "ffdhe4096" 00:21:04.374 } 00:21:04.374 } 00:21:04.374 ]' 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.374 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.635 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:04.635 22:09:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:05.206 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.206 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.206 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.206 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.206 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.206 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.206 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.206 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.468 22:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.728 00:21:05.728 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:05.729 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.729 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.989 { 00:21:05.989 "cntlid": 77, 00:21:05.989 "qid": 0, 00:21:05.989 "state": "enabled", 00:21:05.989 "thread": "nvmf_tgt_poll_group_000", 00:21:05.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:05.989 "listen_address": { 00:21:05.989 "trtype": "TCP", 00:21:05.989 "adrfam": "IPv4", 00:21:05.989 "traddr": "10.0.0.2", 00:21:05.989 "trsvcid": "4420" 00:21:05.989 }, 00:21:05.989 "peer_address": { 00:21:05.989 "trtype": "TCP", 00:21:05.989 "adrfam": "IPv4", 00:21:05.989 "traddr": "10.0.0.1", 00:21:05.989 "trsvcid": "41208" 00:21:05.989 }, 00:21:05.989 "auth": { 00:21:05.989 "state": "completed", 00:21:05.989 "digest": "sha384", 00:21:05.989 "dhgroup": "ffdhe4096" 00:21:05.989 } 00:21:05.989 } 00:21:05.989 ]' 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.989 22:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.989 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.251 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:06.251 22:09:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:06.822 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:07.082 22:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.082 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:07.343 00:21:07.343 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.343 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.343 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.604 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.604 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.604 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.604 22:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.604 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.604 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.604 { 00:21:07.604 "cntlid": 79, 00:21:07.604 "qid": 0, 00:21:07.604 "state": "enabled", 00:21:07.604 "thread": "nvmf_tgt_poll_group_000", 00:21:07.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:07.604 "listen_address": { 00:21:07.604 "trtype": "TCP", 00:21:07.604 "adrfam": "IPv4", 00:21:07.604 "traddr": "10.0.0.2", 00:21:07.604 "trsvcid": "4420" 00:21:07.604 }, 00:21:07.604 "peer_address": { 00:21:07.604 "trtype": "TCP", 00:21:07.604 "adrfam": "IPv4", 00:21:07.604 "traddr": "10.0.0.1", 00:21:07.604 "trsvcid": "41242" 00:21:07.604 }, 00:21:07.604 "auth": { 00:21:07.604 "state": "completed", 00:21:07.604 "digest": "sha384", 00:21:07.604 "dhgroup": "ffdhe4096" 00:21:07.604 } 00:21:07.604 } 00:21:07.604 ]' 00:21:07.604 22:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.604 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.604 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.604 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.604 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.865 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.865 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.865 22:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.865 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:07.865 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:08.806 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.806 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.806 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.806 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.806 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.806 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.806 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.806 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:21:08.806 22:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.806 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.066 00:21:09.066 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.066 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.066 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.326 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.326 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.326 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.326 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.326 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.326 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.326 { 00:21:09.326 "cntlid": 81, 00:21:09.326 "qid": 0, 00:21:09.327 "state": "enabled", 00:21:09.327 "thread": "nvmf_tgt_poll_group_000", 00:21:09.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:09.327 "listen_address": { 
00:21:09.327 "trtype": "TCP", 00:21:09.327 "adrfam": "IPv4", 00:21:09.327 "traddr": "10.0.0.2", 00:21:09.327 "trsvcid": "4420" 00:21:09.327 }, 00:21:09.327 "peer_address": { 00:21:09.327 "trtype": "TCP", 00:21:09.327 "adrfam": "IPv4", 00:21:09.327 "traddr": "10.0.0.1", 00:21:09.327 "trsvcid": "41288" 00:21:09.327 }, 00:21:09.327 "auth": { 00:21:09.327 "state": "completed", 00:21:09.327 "digest": "sha384", 00:21:09.327 "dhgroup": "ffdhe6144" 00:21:09.327 } 00:21:09.327 } 00:21:09.327 ]' 00:21:09.327 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.327 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.327 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.327 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:09.327 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.587 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.587 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.587 22:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.587 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:09.587 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.527 22:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.787 00:21:10.787 22:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.787 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.787 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.048 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.048 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.048 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.048 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.048 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.048 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.048 { 00:21:11.048 "cntlid": 83, 00:21:11.048 "qid": 0, 00:21:11.048 "state": "enabled", 00:21:11.048 "thread": "nvmf_tgt_poll_group_000", 00:21:11.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:11.048 "listen_address": { 00:21:11.048 "trtype": "TCP", 00:21:11.048 "adrfam": "IPv4", 00:21:11.048 "traddr": "10.0.0.2", 00:21:11.048 "trsvcid": "4420" 00:21:11.048 }, 00:21:11.048 "peer_address": { 00:21:11.048 "trtype": "TCP", 00:21:11.048 "adrfam": "IPv4", 00:21:11.048 "traddr": "10.0.0.1", 00:21:11.048 "trsvcid": "41312" 00:21:11.048 }, 00:21:11.048 "auth": { 00:21:11.048 "state": "completed", 00:21:11.048 "digest": "sha384", 00:21:11.048 "dhgroup": "ffdhe6144" 00:21:11.048 } 00:21:11.048 } 00:21:11.048 ]' 00:21:11.048 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:11.048 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.048 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.308 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.308 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.308 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.308 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.308 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.308 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:11.308 22:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.249 22:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.249 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.509 00:21:12.509 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.509 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.509 22:09:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.769 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.769 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.769 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.769 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.769 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.769 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.769 { 00:21:12.769 "cntlid": 85, 00:21:12.769 "qid": 0, 00:21:12.769 "state": "enabled", 00:21:12.769 "thread": "nvmf_tgt_poll_group_000", 00:21:12.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:12.769 "listen_address": { 00:21:12.769 "trtype": "TCP", 00:21:12.769 "adrfam": "IPv4", 00:21:12.769 "traddr": "10.0.0.2", 00:21:12.769 "trsvcid": "4420" 00:21:12.769 }, 00:21:12.769 "peer_address": { 00:21:12.769 "trtype": "TCP", 00:21:12.769 "adrfam": "IPv4", 00:21:12.769 "traddr": "10.0.0.1", 00:21:12.769 "trsvcid": "48002" 00:21:12.769 }, 00:21:12.769 "auth": { 00:21:12.769 "state": "completed", 00:21:12.769 "digest": "sha384", 00:21:12.769 "dhgroup": "ffdhe6144" 00:21:12.769 } 00:21:12.769 } 00:21:12.769 ]' 00:21:12.769 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.769 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.769 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.030 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.030 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.030 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:13.030 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.030 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.030 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:13.030 22:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:13.970 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.230 00:21:14.230 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.230 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.230 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.490 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.490 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.490 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.490 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.490 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.490 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.490 { 00:21:14.490 "cntlid": 87, 00:21:14.490 "qid": 0, 00:21:14.490 "state": "enabled", 00:21:14.490 "thread": "nvmf_tgt_poll_group_000", 00:21:14.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:14.490 "listen_address": { 00:21:14.490 "trtype": 
"TCP", 00:21:14.490 "adrfam": "IPv4", 00:21:14.490 "traddr": "10.0.0.2", 00:21:14.490 "trsvcid": "4420" 00:21:14.490 }, 00:21:14.490 "peer_address": { 00:21:14.490 "trtype": "TCP", 00:21:14.490 "adrfam": "IPv4", 00:21:14.490 "traddr": "10.0.0.1", 00:21:14.490 "trsvcid": "48016" 00:21:14.490 }, 00:21:14.490 "auth": { 00:21:14.490 "state": "completed", 00:21:14.490 "digest": "sha384", 00:21:14.490 "dhgroup": "ffdhe6144" 00:21:14.490 } 00:21:14.490 } 00:21:14.490 ]' 00:21:14.490 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.490 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.490 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.751 22:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.751 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.751 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.751 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.751 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.751 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:14.751 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:15.691 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.691 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.691 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.691 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.691 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.692 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.692 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.692 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.692 22:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.692 22:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.692 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.263 00:21:16.263 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.263 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.263 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.263 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.263 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.263 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.263 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.263 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.263 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.263 { 00:21:16.263 "cntlid": 89, 00:21:16.263 "qid": 0, 00:21:16.263 "state": "enabled", 00:21:16.263 "thread": "nvmf_tgt_poll_group_000", 00:21:16.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:16.263 "listen_address": { 00:21:16.263 "trtype": "TCP", 00:21:16.263 "adrfam": "IPv4", 00:21:16.263 "traddr": "10.0.0.2", 00:21:16.263 "trsvcid": "4420" 00:21:16.263 }, 00:21:16.263 "peer_address": { 00:21:16.263 "trtype": "TCP", 00:21:16.263 "adrfam": "IPv4", 00:21:16.263 "traddr": "10.0.0.1", 00:21:16.263 "trsvcid": "48038" 00:21:16.263 }, 00:21:16.263 "auth": { 00:21:16.263 "state": "completed", 00:21:16.263 "digest": "sha384", 00:21:16.263 "dhgroup": "ffdhe8192" 00:21:16.263 } 00:21:16.263 } 00:21:16.263 ]' 00:21:16.263 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.524 22:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.524 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.524 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.524 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.524 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.524 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.524 22:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.784 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:16.784 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:17.355 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:17.355 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.355 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.355 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.355 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.355 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.355 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.355 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.615 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.616 22:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.876 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.137 { 00:21:18.137 "cntlid": 91, 00:21:18.137 "qid": 0, 00:21:18.137 "state": "enabled", 00:21:18.137 "thread": "nvmf_tgt_poll_group_000", 00:21:18.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:18.137 "listen_address": { 00:21:18.137 "trtype": "TCP", 00:21:18.137 "adrfam": "IPv4", 00:21:18.137 "traddr": "10.0.0.2", 00:21:18.137 "trsvcid": "4420" 00:21:18.137 }, 00:21:18.137 "peer_address": { 00:21:18.137 "trtype": "TCP", 00:21:18.137 "adrfam": "IPv4", 00:21:18.137 "traddr": "10.0.0.1", 00:21:18.137 "trsvcid": "48054" 00:21:18.137 }, 00:21:18.137 "auth": { 00:21:18.137 "state": "completed", 00:21:18.137 "digest": "sha384", 00:21:18.137 "dhgroup": "ffdhe8192" 00:21:18.137 } 00:21:18.137 } 00:21:18.137 ]' 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.137 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.397 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.397 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.397 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:18.397 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.397 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.397 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:18.397 22:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.337 22:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.911 00:21:19.911 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.911 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.911 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.911 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.911 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.911 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.911 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.911 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.911 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.911 { 00:21:19.911 "cntlid": 93, 00:21:19.911 "qid": 0, 00:21:19.911 "state": "enabled", 00:21:19.911 "thread": "nvmf_tgt_poll_group_000", 00:21:19.911 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:19.911 "listen_address": { 00:21:19.911 "trtype": "TCP", 00:21:19.911 "adrfam": "IPv4", 00:21:19.911 "traddr": "10.0.0.2", 00:21:19.911 "trsvcid": "4420" 00:21:19.911 }, 00:21:19.911 "peer_address": { 00:21:19.911 "trtype": "TCP", 00:21:19.911 "adrfam": "IPv4", 00:21:19.911 "traddr": "10.0.0.1", 00:21:19.911 "trsvcid": "48098" 00:21:19.911 }, 00:21:19.911 "auth": { 00:21:19.911 "state": "completed", 00:21:19.911 "digest": "sha384", 00:21:19.911 "dhgroup": "ffdhe8192" 00:21:19.911 } 00:21:19.911 } 00:21:19.911 ]' 00:21:19.911 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.172 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.172 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.172 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.172 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.172 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.172 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.172 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.433 22:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:20.433 22:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:21.003 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.003 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:21.003 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.003 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.003 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.003 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.003 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:21.003 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.263 22:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.524 00:21:21.784 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:21.784 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.784 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.784 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.784 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.784 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.784 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.784 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.784 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.784 { 00:21:21.784 "cntlid": 95, 00:21:21.784 "qid": 0, 00:21:21.784 "state": "enabled", 00:21:21.784 "thread": "nvmf_tgt_poll_group_000", 00:21:21.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:21.784 "listen_address": { 00:21:21.784 "trtype": "TCP", 00:21:21.784 "adrfam": "IPv4", 00:21:21.784 "traddr": "10.0.0.2", 00:21:21.784 "trsvcid": "4420" 00:21:21.784 }, 00:21:21.784 "peer_address": { 00:21:21.784 "trtype": "TCP", 00:21:21.784 "adrfam": "IPv4", 00:21:21.784 "traddr": "10.0.0.1", 00:21:21.785 "trsvcid": "48134" 00:21:21.785 }, 00:21:21.785 "auth": { 00:21:21.785 "state": "completed", 00:21:21.785 "digest": "sha384", 00:21:21.785 "dhgroup": "ffdhe8192" 00:21:21.785 } 00:21:21.785 } 00:21:21.785 ]' 00:21:21.785 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.047 22:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.047 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.047 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:22.047 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.047 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.047 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.047 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.307 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:22.307 22:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:22.876 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.876 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.876 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.877 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.877 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.877 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:22.877 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.877 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.877 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.877 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.137 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.137 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.397 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.397 22:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.397 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.397 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.397 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.398 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.398 { 00:21:23.398 "cntlid": 97, 00:21:23.398 "qid": 0, 00:21:23.398 "state": "enabled", 00:21:23.398 "thread": "nvmf_tgt_poll_group_000", 00:21:23.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:23.398 "listen_address": { 00:21:23.398 "trtype": "TCP", 00:21:23.398 "adrfam": "IPv4", 00:21:23.398 "traddr": "10.0.0.2", 00:21:23.398 "trsvcid": "4420" 00:21:23.398 }, 00:21:23.398 "peer_address": { 00:21:23.398 "trtype": "TCP", 00:21:23.398 "adrfam": "IPv4", 00:21:23.398 "traddr": "10.0.0.1", 00:21:23.398 "trsvcid": "43754" 00:21:23.398 }, 00:21:23.398 "auth": { 00:21:23.398 "state": "completed", 00:21:23.398 "digest": "sha512", 00:21:23.398 "dhgroup": "null" 00:21:23.398 } 00:21:23.398 } 00:21:23.398 ]' 00:21:23.398 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.398 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.398 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.398 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:23.398 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.658 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.658 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.658 22:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.658 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:23.658 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.621 22:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.936 00:21:24.936 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.936 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.936 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.936 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.936 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.936 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.936 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.936 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.936 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.936 { 00:21:24.936 "cntlid": 99, 
00:21:24.936 "qid": 0, 00:21:24.936 "state": "enabled", 00:21:24.936 "thread": "nvmf_tgt_poll_group_000", 00:21:24.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:24.936 "listen_address": { 00:21:24.936 "trtype": "TCP", 00:21:24.936 "adrfam": "IPv4", 00:21:24.936 "traddr": "10.0.0.2", 00:21:24.936 "trsvcid": "4420" 00:21:24.936 }, 00:21:24.936 "peer_address": { 00:21:24.936 "trtype": "TCP", 00:21:24.936 "adrfam": "IPv4", 00:21:24.936 "traddr": "10.0.0.1", 00:21:24.936 "trsvcid": "43768" 00:21:24.936 }, 00:21:24.936 "auth": { 00:21:24.936 "state": "completed", 00:21:24.936 "digest": "sha512", 00:21:24.936 "dhgroup": "null" 00:21:24.936 } 00:21:24.936 } 00:21:24.936 ]' 00:21:24.936 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.201 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.201 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.201 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:25.201 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.201 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.201 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.201 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.462 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret 
DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:25.462 22:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:26.035 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.035 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.035 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.035 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.035 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.035 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.035 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.035 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.295 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.295 00:21:26.555 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.555 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.555 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.555 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.555 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.555 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.555 22:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.555 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.555 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.555 { 00:21:26.555 "cntlid": 101, 00:21:26.555 "qid": 0, 00:21:26.555 "state": "enabled", 00:21:26.555 "thread": "nvmf_tgt_poll_group_000", 00:21:26.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:26.555 "listen_address": { 00:21:26.555 "trtype": "TCP", 00:21:26.555 "adrfam": "IPv4", 00:21:26.555 "traddr": "10.0.0.2", 00:21:26.555 "trsvcid": "4420" 00:21:26.555 }, 00:21:26.555 "peer_address": { 00:21:26.555 "trtype": "TCP", 00:21:26.555 "adrfam": "IPv4", 00:21:26.555 "traddr": "10.0.0.1", 00:21:26.555 "trsvcid": "43798" 00:21:26.555 }, 00:21:26.555 "auth": { 00:21:26.555 "state": "completed", 00:21:26.555 "digest": "sha512", 00:21:26.555 "dhgroup": "null" 00:21:26.555 } 00:21:26.555 } 
00:21:26.555 ]' 00:21:26.555 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.555 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.816 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.816 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:26.816 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.816 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.816 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.816 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.816 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:26.816 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:27.756 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.756 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.756 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:27.756 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.756 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.756 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.756 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.756 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:27.756 22:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.756 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.016 00:21:28.016 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.016 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.016 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.277 { 00:21:28.277 "cntlid": 103, 00:21:28.277 "qid": 0, 00:21:28.277 "state": "enabled", 00:21:28.277 "thread": "nvmf_tgt_poll_group_000", 00:21:28.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:28.277 "listen_address": { 00:21:28.277 "trtype": "TCP", 00:21:28.277 "adrfam": "IPv4", 00:21:28.277 "traddr": "10.0.0.2", 00:21:28.277 "trsvcid": "4420" 00:21:28.277 }, 00:21:28.277 "peer_address": { 00:21:28.277 "trtype": "TCP", 00:21:28.277 "adrfam": "IPv4", 00:21:28.277 "traddr": "10.0.0.1", 00:21:28.277 "trsvcid": "43824" 00:21:28.277 }, 00:21:28.277 "auth": { 00:21:28.277 "state": "completed", 00:21:28.277 "digest": "sha512", 00:21:28.277 "dhgroup": "null" 00:21:28.277 } 00:21:28.277 } 00:21:28.277 ]' 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.277 22:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.277 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.538 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:28.538 22:09:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:29.109 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.109 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:29.109 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.109 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.109 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.109 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.109 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.109 22:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.109 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.369 22:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.628 00:21:29.628 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.628 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.628 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.888 { 00:21:29.888 "cntlid": 105, 00:21:29.888 "qid": 0, 00:21:29.888 "state": "enabled", 00:21:29.888 "thread": "nvmf_tgt_poll_group_000", 00:21:29.888 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:29.888 "listen_address": { 00:21:29.888 "trtype": "TCP", 00:21:29.888 "adrfam": "IPv4", 00:21:29.888 "traddr": "10.0.0.2", 00:21:29.888 "trsvcid": "4420" 00:21:29.888 }, 00:21:29.888 "peer_address": { 00:21:29.888 "trtype": "TCP", 00:21:29.888 "adrfam": "IPv4", 00:21:29.888 "traddr": "10.0.0.1", 00:21:29.888 "trsvcid": "43866" 00:21:29.888 }, 00:21:29.888 "auth": { 00:21:29.888 "state": "completed", 00:21:29.888 "digest": "sha512", 00:21:29.888 "dhgroup": "ffdhe2048" 00:21:29.888 } 00:21:29.888 } 00:21:29.888 ]' 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.888 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.147 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret 
DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:30.147 22:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:30.716 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.716 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:30.716 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.716 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.716 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.716 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.716 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.716 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.976 22:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.976 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.236 00:21:31.236 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.236 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.236 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.497 { 00:21:31.497 "cntlid": 107, 00:21:31.497 "qid": 0, 00:21:31.497 "state": "enabled", 00:21:31.497 "thread": "nvmf_tgt_poll_group_000", 00:21:31.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:31.497 "listen_address": { 00:21:31.497 "trtype": "TCP", 00:21:31.497 "adrfam": "IPv4", 00:21:31.497 "traddr": "10.0.0.2", 00:21:31.497 "trsvcid": "4420" 00:21:31.497 }, 00:21:31.497 "peer_address": { 00:21:31.497 "trtype": "TCP", 00:21:31.497 "adrfam": "IPv4", 00:21:31.497 "traddr": "10.0.0.1", 00:21:31.497 "trsvcid": "43898" 00:21:31.497 }, 00:21:31.497 "auth": { 00:21:31.497 "state": 
"completed", 00:21:31.497 "digest": "sha512", 00:21:31.497 "dhgroup": "ffdhe2048" 00:21:31.497 } 00:21:31.497 } 00:21:31.497 ]' 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.497 22:09:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.758 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:31.758 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:32.330 22:09:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.330 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.330 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.330 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.330 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.330 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.330 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.330 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.591 22:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.852 00:21:32.852 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.852 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.852 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.113 
22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.113 { 00:21:33.113 "cntlid": 109, 00:21:33.113 "qid": 0, 00:21:33.113 "state": "enabled", 00:21:33.113 "thread": "nvmf_tgt_poll_group_000", 00:21:33.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:33.113 "listen_address": { 00:21:33.113 "trtype": "TCP", 00:21:33.113 "adrfam": "IPv4", 00:21:33.113 "traddr": "10.0.0.2", 00:21:33.113 "trsvcid": "4420" 00:21:33.113 }, 00:21:33.113 "peer_address": { 00:21:33.113 "trtype": "TCP", 00:21:33.113 "adrfam": "IPv4", 00:21:33.113 "traddr": "10.0.0.1", 00:21:33.113 "trsvcid": "37366" 00:21:33.113 }, 00:21:33.113 "auth": { 00:21:33.113 "state": "completed", 00:21:33.113 "digest": "sha512", 00:21:33.113 "dhgroup": "ffdhe2048" 00:21:33.113 } 00:21:33.113 } 00:21:33.113 ]' 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:33.113 22:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.113 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.373 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:33.373 22:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:33.945 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.945 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:33.945 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.945 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.945 
22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.945 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.945 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.945 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.206 22:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.206 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.466 00:21:34.466 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.466 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.466 22:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.727 { 00:21:34.727 "cntlid": 111, 
00:21:34.727 "qid": 0, 00:21:34.727 "state": "enabled", 00:21:34.727 "thread": "nvmf_tgt_poll_group_000", 00:21:34.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:34.727 "listen_address": { 00:21:34.727 "trtype": "TCP", 00:21:34.727 "adrfam": "IPv4", 00:21:34.727 "traddr": "10.0.0.2", 00:21:34.727 "trsvcid": "4420" 00:21:34.727 }, 00:21:34.727 "peer_address": { 00:21:34.727 "trtype": "TCP", 00:21:34.727 "adrfam": "IPv4", 00:21:34.727 "traddr": "10.0.0.1", 00:21:34.727 "trsvcid": "37386" 00:21:34.727 }, 00:21:34.727 "auth": { 00:21:34.727 "state": "completed", 00:21:34.727 "digest": "sha512", 00:21:34.727 "dhgroup": "ffdhe2048" 00:21:34.727 } 00:21:34.727 } 00:21:34.727 ]' 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.727 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.987 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:34.987 22:09:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:35.559 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.559 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.559 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.559 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.559 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.559 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.559 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.559 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.559 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.820 22:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.820 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.080 00:21:36.080 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.080 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.080 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.341 { 00:21:36.341 "cntlid": 113, 00:21:36.341 "qid": 0, 00:21:36.341 "state": "enabled", 00:21:36.341 "thread": "nvmf_tgt_poll_group_000", 00:21:36.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:36.341 "listen_address": { 00:21:36.341 "trtype": "TCP", 00:21:36.341 "adrfam": "IPv4", 00:21:36.341 "traddr": "10.0.0.2", 00:21:36.341 "trsvcid": "4420" 00:21:36.341 }, 00:21:36.341 "peer_address": { 00:21:36.341 "trtype": "TCP", 00:21:36.341 "adrfam": "IPv4", 00:21:36.341 "traddr": "10.0.0.1", 00:21:36.341 "trsvcid": "37410" 00:21:36.341 }, 00:21:36.341 "auth": { 00:21:36.341 "state": 
"completed", 00:21:36.341 "digest": "sha512", 00:21:36.341 "dhgroup": "ffdhe3072" 00:21:36.341 } 00:21:36.341 } 00:21:36.341 ]' 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.341 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.602 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:36.602 22:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret 
DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:37.173 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.173 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.173 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.173 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.173 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.173 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.173 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.173 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.433 22:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.694 00:21:37.694 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.694 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.694 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.955 { 00:21:37.955 "cntlid": 115, 00:21:37.955 "qid": 0, 00:21:37.955 "state": "enabled", 00:21:37.955 "thread": "nvmf_tgt_poll_group_000", 00:21:37.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:37.955 "listen_address": { 00:21:37.955 "trtype": "TCP", 00:21:37.955 "adrfam": "IPv4", 00:21:37.955 "traddr": "10.0.0.2", 00:21:37.955 "trsvcid": "4420" 00:21:37.955 }, 00:21:37.955 "peer_address": { 00:21:37.955 "trtype": "TCP", 00:21:37.955 "adrfam": "IPv4", 00:21:37.955 "traddr": "10.0.0.1", 00:21:37.955 "trsvcid": "37422" 00:21:37.955 }, 00:21:37.955 "auth": { 00:21:37.955 "state": "completed", 00:21:37.955 "digest": "sha512", 00:21:37.955 "dhgroup": "ffdhe3072" 00:21:37.955 } 00:21:37.955 } 00:21:37.955 ]' 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.955 22:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.955 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.215 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:38.215 22:09:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:38.786 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.786 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:38.786 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:38.786 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.786 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.786 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.786 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.786 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.047 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.308 00:21:39.308 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.308 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.308 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.569 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.569 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.569 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.569 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.569 22:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.569 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.569 { 00:21:39.569 "cntlid": 117, 00:21:39.569 "qid": 0, 00:21:39.569 "state": "enabled", 00:21:39.569 "thread": "nvmf_tgt_poll_group_000", 00:21:39.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:39.569 "listen_address": { 00:21:39.569 "trtype": "TCP", 00:21:39.569 "adrfam": "IPv4", 00:21:39.569 "traddr": "10.0.0.2", 00:21:39.569 "trsvcid": "4420" 00:21:39.569 }, 00:21:39.569 "peer_address": { 00:21:39.569 "trtype": "TCP", 00:21:39.569 "adrfam": "IPv4", 00:21:39.569 "traddr": "10.0.0.1", 00:21:39.569 "trsvcid": "37440" 00:21:39.569 }, 00:21:39.569 "auth": { 00:21:39.569 "state": "completed", 00:21:39.569 "digest": "sha512", 00:21:39.569 "dhgroup": "ffdhe3072" 00:21:39.569 } 00:21:39.569 } 00:21:39.569 ]' 00:21:39.569 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.569 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.569 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.569 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:39.569 22:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.569 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.569 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.569 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.829 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:39.829 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:40.398 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.398 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.398 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.398 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.398 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.398 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.398 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:40.398 22:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.658 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.919 00:21:40.919 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.919 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.919 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.180 { 00:21:41.180 "cntlid": 119, 00:21:41.180 "qid": 0, 00:21:41.180 "state": "enabled", 00:21:41.180 "thread": "nvmf_tgt_poll_group_000", 00:21:41.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:41.180 "listen_address": { 00:21:41.180 "trtype": "TCP", 00:21:41.180 "adrfam": "IPv4", 00:21:41.180 "traddr": "10.0.0.2", 00:21:41.180 "trsvcid": "4420" 00:21:41.180 }, 00:21:41.180 "peer_address": { 00:21:41.180 "trtype": "TCP", 00:21:41.180 "adrfam": "IPv4", 00:21:41.180 "traddr": "10.0.0.1", 
00:21:41.180 "trsvcid": "37460" 00:21:41.180 }, 00:21:41.180 "auth": { 00:21:41.180 "state": "completed", 00:21:41.180 "digest": "sha512", 00:21:41.180 "dhgroup": "ffdhe3072" 00:21:41.180 } 00:21:41.180 } 00:21:41.180 ]' 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.180 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.441 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:41.441 22:09:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:42.012 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.012 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.012 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.012 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.012 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.012 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.012 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.012 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.012 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:42.272 22:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.272 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.273 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.533 00:21:42.533 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.533 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.533 22:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.794 { 00:21:42.794 "cntlid": 121, 00:21:42.794 "qid": 0, 00:21:42.794 "state": "enabled", 00:21:42.794 "thread": "nvmf_tgt_poll_group_000", 00:21:42.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:42.794 "listen_address": { 00:21:42.794 "trtype": "TCP", 00:21:42.794 "adrfam": "IPv4", 00:21:42.794 "traddr": "10.0.0.2", 00:21:42.794 "trsvcid": "4420" 00:21:42.794 }, 00:21:42.794 "peer_address": { 00:21:42.794 "trtype": "TCP", 00:21:42.794 "adrfam": "IPv4", 00:21:42.794 "traddr": "10.0.0.1", 00:21:42.794 "trsvcid": "42374" 00:21:42.794 }, 00:21:42.794 "auth": { 00:21:42.794 "state": "completed", 00:21:42.794 "digest": "sha512", 00:21:42.794 "dhgroup": "ffdhe4096" 00:21:42.794 } 00:21:42.794 } 00:21:42.794 ]' 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.794 22:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.794 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.057 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:43.057 22:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:43.627 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.627 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:43.627 22:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.627 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.627 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.627 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.627 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:43.627 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.888 22:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.888 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.148 00:21:44.148 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.148 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.148 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.409 { 00:21:44.409 "cntlid": 123, 00:21:44.409 "qid": 0, 00:21:44.409 "state": "enabled", 00:21:44.409 "thread": "nvmf_tgt_poll_group_000", 00:21:44.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:44.409 "listen_address": { 00:21:44.409 "trtype": "TCP", 00:21:44.409 "adrfam": "IPv4", 00:21:44.409 "traddr": "10.0.0.2", 00:21:44.409 "trsvcid": "4420" 00:21:44.409 }, 00:21:44.409 "peer_address": { 00:21:44.409 "trtype": "TCP", 00:21:44.409 "adrfam": "IPv4", 00:21:44.409 "traddr": "10.0.0.1", 00:21:44.409 "trsvcid": "42406" 00:21:44.409 }, 00:21:44.409 "auth": { 00:21:44.409 "state": "completed", 00:21:44.409 "digest": "sha512", 00:21:44.409 "dhgroup": "ffdhe4096" 00:21:44.409 } 00:21:44.409 } 00:21:44.409 ]' 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.409 22:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.669 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:44.670 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:45.240 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.240 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:45.240 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.240 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.240 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.240 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.240 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.240 22:10:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.500 22:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.761 00:21:45.761 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.761 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.761 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.022 { 00:21:46.022 "cntlid": 125, 00:21:46.022 "qid": 0, 00:21:46.022 "state": "enabled", 00:21:46.022 "thread": "nvmf_tgt_poll_group_000", 00:21:46.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:46.022 "listen_address": { 00:21:46.022 "trtype": "TCP", 00:21:46.022 "adrfam": "IPv4", 00:21:46.022 "traddr": "10.0.0.2", 00:21:46.022 
"trsvcid": "4420" 00:21:46.022 }, 00:21:46.022 "peer_address": { 00:21:46.022 "trtype": "TCP", 00:21:46.022 "adrfam": "IPv4", 00:21:46.022 "traddr": "10.0.0.1", 00:21:46.022 "trsvcid": "42444" 00:21:46.022 }, 00:21:46.022 "auth": { 00:21:46.022 "state": "completed", 00:21:46.022 "digest": "sha512", 00:21:46.022 "dhgroup": "ffdhe4096" 00:21:46.022 } 00:21:46.022 } 00:21:46.022 ]' 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:46.022 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.283 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.283 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.283 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.283 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:46.283 22:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:46.854 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.114 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.115 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.375 00:21:47.375 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.375 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.375 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.636 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.636 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.636 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.636 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.636 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.636 22:10:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.636 { 00:21:47.636 "cntlid": 127, 00:21:47.636 "qid": 0, 00:21:47.636 "state": "enabled", 00:21:47.636 "thread": "nvmf_tgt_poll_group_000", 00:21:47.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:47.636 "listen_address": { 00:21:47.636 "trtype": "TCP", 00:21:47.636 "adrfam": "IPv4", 00:21:47.636 "traddr": "10.0.0.2", 00:21:47.636 "trsvcid": "4420" 00:21:47.636 }, 00:21:47.636 "peer_address": { 00:21:47.636 "trtype": "TCP", 00:21:47.636 "adrfam": "IPv4", 00:21:47.636 "traddr": "10.0.0.1", 00:21:47.636 "trsvcid": "42466" 00:21:47.636 }, 00:21:47.636 "auth": { 00:21:47.636 "state": "completed", 00:21:47.636 "digest": "sha512", 00:21:47.636 "dhgroup": "ffdhe4096" 00:21:47.636 } 00:21:47.636 } 00:21:47.636 ]' 00:21:47.636 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.636 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.636 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.636 22:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:47.636 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.896 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.897 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.897 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.897 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:47.897 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:48.467 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.727 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.727 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.727 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:48.727 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.727 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.727 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.727 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.727 22:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.727 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.728 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.299 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.299 22:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.299 { 00:21:49.299 "cntlid": 129, 00:21:49.299 "qid": 0, 00:21:49.299 "state": "enabled", 00:21:49.299 "thread": "nvmf_tgt_poll_group_000", 00:21:49.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:49.299 "listen_address": { 00:21:49.299 "trtype": "TCP", 00:21:49.299 "adrfam": "IPv4", 00:21:49.299 "traddr": "10.0.0.2", 00:21:49.299 "trsvcid": "4420" 00:21:49.299 }, 00:21:49.299 "peer_address": { 00:21:49.299 "trtype": "TCP", 00:21:49.299 "adrfam": "IPv4", 00:21:49.299 "traddr": "10.0.0.1", 00:21:49.299 "trsvcid": "42494" 00:21:49.299 }, 00:21:49.299 "auth": { 00:21:49.299 "state": "completed", 00:21:49.299 "digest": "sha512", 00:21:49.299 "dhgroup": "ffdhe6144" 00:21:49.299 } 00:21:49.299 } 00:21:49.299 ]' 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.299 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.560 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.561 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.561 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.561 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.561 22:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.561 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:49.561 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:50.501 22:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.501 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.502 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.502 22:10:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.762 00:21:50.762 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.762 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.762 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.022 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.022 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.022 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.022 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.022 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.022 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.022 { 00:21:51.022 "cntlid": 131, 00:21:51.022 "qid": 0, 00:21:51.022 "state": "enabled", 00:21:51.022 "thread": "nvmf_tgt_poll_group_000", 00:21:51.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:51.022 "listen_address": { 00:21:51.022 "trtype": "TCP", 00:21:51.022 "adrfam": "IPv4", 00:21:51.022 "traddr": "10.0.0.2", 00:21:51.022 
"trsvcid": "4420" 00:21:51.022 }, 00:21:51.022 "peer_address": { 00:21:51.022 "trtype": "TCP", 00:21:51.022 "adrfam": "IPv4", 00:21:51.022 "traddr": "10.0.0.1", 00:21:51.022 "trsvcid": "42508" 00:21:51.023 }, 00:21:51.023 "auth": { 00:21:51.023 "state": "completed", 00:21:51.023 "digest": "sha512", 00:21:51.023 "dhgroup": "ffdhe6144" 00:21:51.023 } 00:21:51.023 } 00:21:51.023 ]' 00:21:51.023 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.023 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.023 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.284 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:51.284 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.284 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.284 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.284 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.284 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:51.284 22:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.226 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.486 00:21:52.486 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.486 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:52.486 22:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.747 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.747 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.747 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.747 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.747 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.747 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.747 { 00:21:52.747 "cntlid": 133, 00:21:52.747 "qid": 0, 00:21:52.747 "state": "enabled", 00:21:52.747 "thread": "nvmf_tgt_poll_group_000", 00:21:52.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:52.747 "listen_address": { 00:21:52.747 "trtype": "TCP", 00:21:52.747 "adrfam": "IPv4", 00:21:52.747 "traddr": "10.0.0.2", 00:21:52.747 "trsvcid": "4420" 00:21:52.747 }, 00:21:52.747 "peer_address": { 00:21:52.747 "trtype": "TCP", 00:21:52.747 "adrfam": "IPv4", 00:21:52.747 "traddr": "10.0.0.1", 00:21:52.747 "trsvcid": "48454" 00:21:52.747 }, 00:21:52.747 "auth": { 00:21:52.747 "state": "completed", 00:21:52.747 "digest": "sha512", 00:21:52.747 "dhgroup": "ffdhe6144" 00:21:52.747 } 00:21:52.747 } 00:21:52.747 ]' 00:21:52.747 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.747 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.747 22:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.747 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:52.747 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.007 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.007 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.007 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.007 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:53.007 22:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.960 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.220 00:21:54.221 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.221 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.221 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.482 { 00:21:54.482 "cntlid": 135, 00:21:54.482 "qid": 0, 00:21:54.482 "state": "enabled", 00:21:54.482 "thread": "nvmf_tgt_poll_group_000", 00:21:54.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:54.482 "listen_address": { 00:21:54.482 "trtype": "TCP", 00:21:54.482 "adrfam": "IPv4", 00:21:54.482 "traddr": "10.0.0.2", 00:21:54.482 "trsvcid": "4420" 00:21:54.482 }, 00:21:54.482 "peer_address": { 00:21:54.482 "trtype": "TCP", 00:21:54.482 "adrfam": "IPv4", 00:21:54.482 "traddr": "10.0.0.1", 00:21:54.482 "trsvcid": "48488" 00:21:54.482 }, 00:21:54.482 "auth": { 00:21:54.482 "state": "completed", 00:21:54.482 "digest": "sha512", 00:21:54.482 "dhgroup": "ffdhe6144" 00:21:54.482 } 00:21:54.482 } 00:21:54.482 ]' 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.482 22:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.743 22:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:54.743 22:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:21:55.314 22:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.314 22:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:55.314 22:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.314 22:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.575 22:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.575 22:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.575 22:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.575 22:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.575 22:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.575 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.146 00:21:56.146 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.146 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.146 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.407 { 00:21:56.407 "cntlid": 137, 00:21:56.407 "qid": 0, 00:21:56.407 "state": "enabled", 00:21:56.407 "thread": "nvmf_tgt_poll_group_000", 00:21:56.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:56.407 "listen_address": { 00:21:56.407 "trtype": "TCP", 00:21:56.407 "adrfam": "IPv4", 00:21:56.407 "traddr": "10.0.0.2", 00:21:56.407 
"trsvcid": "4420" 00:21:56.407 }, 00:21:56.407 "peer_address": { 00:21:56.407 "trtype": "TCP", 00:21:56.407 "adrfam": "IPv4", 00:21:56.407 "traddr": "10.0.0.1", 00:21:56.407 "trsvcid": "48528" 00:21:56.407 }, 00:21:56.407 "auth": { 00:21:56.407 "state": "completed", 00:21:56.407 "digest": "sha512", 00:21:56.407 "dhgroup": "ffdhe8192" 00:21:56.407 } 00:21:56.407 } 00:21:56.407 ]' 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.407 22:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.668 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:56.668 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:21:57.239 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.239 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.239 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.239 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.239 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.240 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.240 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.240 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.501 22:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.501 22:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.071 00:21:58.071 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.071 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.072 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.072 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.072 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.072 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.072 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.072 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.332 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.332 { 00:21:58.332 "cntlid": 139, 00:21:58.332 "qid": 0, 00:21:58.333 "state": "enabled", 00:21:58.333 "thread": "nvmf_tgt_poll_group_000", 00:21:58.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:58.333 "listen_address": { 00:21:58.333 "trtype": "TCP", 00:21:58.333 "adrfam": "IPv4", 00:21:58.333 "traddr": "10.0.0.2", 00:21:58.333 "trsvcid": "4420" 00:21:58.333 }, 00:21:58.333 "peer_address": { 00:21:58.333 "trtype": "TCP", 00:21:58.333 "adrfam": "IPv4", 00:21:58.333 "traddr": "10.0.0.1", 00:21:58.333 "trsvcid": "48556" 00:21:58.333 }, 00:21:58.333 "auth": { 00:21:58.333 "state": "completed", 00:21:58.333 "digest": "sha512", 00:21:58.333 "dhgroup": "ffdhe8192" 00:21:58.333 } 00:21:58.333 } 00:21:58.333 ]' 00:21:58.333 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.333 22:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.333 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.333 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.333 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.333 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.333 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.333 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.594 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:58.594 22:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: --dhchap-ctrl-secret DHHC-1:02:YzEyYTM4MWEyYzk2Y2YwM2Y2ODJjY2ExOWJhNTBjZjYyMjA3MzQ3MzIyNjNlMWZhqhL67g==: 00:21:59.164 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.164 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:59.164 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.164 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.164 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.164 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.164 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.164 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.425 22:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.997 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.997 22:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.997 { 00:21:59.997 "cntlid": 141, 00:21:59.997 "qid": 0, 00:21:59.997 "state": "enabled", 00:21:59.997 "thread": "nvmf_tgt_poll_group_000", 00:21:59.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:59.997 "listen_address": { 00:21:59.997 "trtype": "TCP", 00:21:59.997 "adrfam": "IPv4", 00:21:59.997 "traddr": "10.0.0.2", 00:21:59.997 "trsvcid": "4420" 00:21:59.997 }, 00:21:59.997 "peer_address": { 00:21:59.997 "trtype": "TCP", 00:21:59.997 "adrfam": "IPv4", 00:21:59.997 "traddr": "10.0.0.1", 00:21:59.997 "trsvcid": "48586" 00:21:59.997 }, 00:21:59.997 "auth": { 00:21:59.997 "state": "completed", 00:21:59.997 "digest": "sha512", 00:21:59.997 "dhgroup": "ffdhe8192" 00:21:59.997 } 00:21:59.997 } 00:21:59.997 ]' 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.997 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.258 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.258 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.258 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.258 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.258 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.258 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:22:00.258 22:10:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:01:MzgxMmU2ZGMzYzlkZjY0ODE4YTkzNmQ2YTkxNWQ3YjZpOBic: 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.199 22:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.772 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.772 { 00:22:01.772 "cntlid": 143, 00:22:01.772 "qid": 0, 00:22:01.772 "state": "enabled", 00:22:01.772 "thread": "nvmf_tgt_poll_group_000", 00:22:01.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:01.772 "listen_address": { 00:22:01.772 "trtype": "TCP", 00:22:01.772 "adrfam": 
"IPv4", 00:22:01.772 "traddr": "10.0.0.2", 00:22:01.772 "trsvcid": "4420" 00:22:01.772 }, 00:22:01.772 "peer_address": { 00:22:01.772 "trtype": "TCP", 00:22:01.772 "adrfam": "IPv4", 00:22:01.772 "traddr": "10.0.0.1", 00:22:01.772 "trsvcid": "48622" 00:22:01.772 }, 00:22:01.772 "auth": { 00:22:01.772 "state": "completed", 00:22:01.772 "digest": "sha512", 00:22:01.772 "dhgroup": "ffdhe8192" 00:22:01.772 } 00:22:01.772 } 00:22:01.772 ]' 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.772 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.034 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.034 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.034 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.034 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.034 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.034 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.294 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:22:02.294 22:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:22:02.865 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.865 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.865 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.865 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.865 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.865 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:02.866 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:02.866 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:02.866 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.866 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:02.866 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:03.127 22:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.127 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.387 00:22:03.387 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.387 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.387 22:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.648 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.648 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.648 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.648 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.648 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.648 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.648 { 00:22:03.648 "cntlid": 145, 00:22:03.648 "qid": 0, 00:22:03.648 "state": "enabled", 00:22:03.648 "thread": "nvmf_tgt_poll_group_000", 00:22:03.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:03.648 "listen_address": { 00:22:03.648 "trtype": "TCP", 00:22:03.648 "adrfam": "IPv4", 00:22:03.648 "traddr": "10.0.0.2", 00:22:03.648 "trsvcid": "4420" 00:22:03.648 }, 00:22:03.648 "peer_address": { 00:22:03.648 "trtype": "TCP", 00:22:03.648 "adrfam": "IPv4", 00:22:03.648 "traddr": "10.0.0.1", 00:22:03.648 "trsvcid": "39498" 00:22:03.648 }, 00:22:03.648 "auth": { 00:22:03.648 "state": 
"completed", 00:22:03.648 "digest": "sha512", 00:22:03.648 "dhgroup": "ffdhe8192" 00:22:03.648 } 00:22:03.648 } 00:22:03.648 ]' 00:22:03.648 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.648 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.648 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.908 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.908 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.908 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.908 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.908 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.908 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:22:03.908 22:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmQ1ZjQxOTVjY2Y2ZTc4YjVmOTEyNjJmZWQxOWRmYmFlNzM5NGY1YjQwMTJjOGQxomFB5g==: --dhchap-ctrl-secret 
DHHC-1:03:YTA0NmYyZGRhMDM2YTU2NjY3MTQ4MjExZGI4Njg1NTY1ZGE0NWFiODRmMDhjMTg4OWIxZDdkMjkwNDIxODZjMO12jMc=: 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:04.849 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:05.109 request: 00:22:05.109 { 00:22:05.109 "name": "nvme0", 00:22:05.109 "trtype": "tcp", 00:22:05.109 "traddr": "10.0.0.2", 00:22:05.109 "adrfam": "ipv4", 00:22:05.109 "trsvcid": "4420", 00:22:05.109 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:05.109 "prchk_reftag": false, 00:22:05.109 "prchk_guard": false, 00:22:05.109 "hdgst": false, 00:22:05.109 "ddgst": false, 00:22:05.109 "dhchap_key": "key2", 00:22:05.109 "allow_unrecognized_csi": false, 00:22:05.109 "method": "bdev_nvme_attach_controller", 00:22:05.109 "req_id": 1 00:22:05.109 } 00:22:05.109 Got JSON-RPC error response 00:22:05.109 response: 00:22:05.109 { 00:22:05.109 "code": -5, 00:22:05.109 "message": 
"Input/output error" 00:22:05.109 } 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.109 22:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.109 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:05.680 request: 00:22:05.680 { 00:22:05.680 "name": "nvme0", 00:22:05.680 "trtype": "tcp", 00:22:05.680 "traddr": "10.0.0.2", 00:22:05.680 "adrfam": "ipv4", 00:22:05.680 "trsvcid": "4420", 00:22:05.680 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:05.680 "prchk_reftag": false, 00:22:05.680 "prchk_guard": false, 00:22:05.680 "hdgst": 
false, 00:22:05.680 "ddgst": false, 00:22:05.680 "dhchap_key": "key1", 00:22:05.680 "dhchap_ctrlr_key": "ckey2", 00:22:05.680 "allow_unrecognized_csi": false, 00:22:05.680 "method": "bdev_nvme_attach_controller", 00:22:05.680 "req_id": 1 00:22:05.680 } 00:22:05.680 Got JSON-RPC error response 00:22:05.680 response: 00:22:05.680 { 00:22:05.680 "code": -5, 00:22:05.680 "message": "Input/output error" 00:22:05.680 } 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.680 22:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.940 request: 00:22:05.940 { 00:22:05.940 "name": "nvme0", 00:22:05.940 "trtype": 
"tcp", 00:22:05.940 "traddr": "10.0.0.2", 00:22:05.940 "adrfam": "ipv4", 00:22:05.940 "trsvcid": "4420", 00:22:05.940 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:05.940 "prchk_reftag": false, 00:22:05.940 "prchk_guard": false, 00:22:05.940 "hdgst": false, 00:22:05.940 "ddgst": false, 00:22:05.940 "dhchap_key": "key1", 00:22:05.940 "dhchap_ctrlr_key": "ckey1", 00:22:05.940 "allow_unrecognized_csi": false, 00:22:05.940 "method": "bdev_nvme_attach_controller", 00:22:05.940 "req_id": 1 00:22:05.940 } 00:22:05.940 Got JSON-RPC error response 00:22:05.940 response: 00:22:05.940 { 00:22:05.940 "code": -5, 00:22:05.940 "message": "Input/output error" 00:22:05.940 } 00:22:05.940 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:05.940 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.940 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.940 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.940 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.940 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.940 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3505471 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@950 -- # '[' -z 3505471 ']' 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3505471 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3505471 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3505471' 00:22:06.200 killing process with pid 3505471 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3505471 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3505471 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3531752 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3531752 00:22:06.200 22:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3531752 ']' 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.200 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.201 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.201 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.201 22:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3531752 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3531752 ']' 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.318 null0 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fUM 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.EpH ]] 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EpH 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nvB 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.318 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.FJt ]] 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FJt 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9nm 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Ioh ]] 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Ioh 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:07.599 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.a3T 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.600 22:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:08.171 nvme0n1 00:22:08.171 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.171 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.171 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.433 { 00:22:08.433 "cntlid": 1, 00:22:08.433 "qid": 0, 00:22:08.433 "state": "enabled", 00:22:08.433 "thread": "nvmf_tgt_poll_group_000", 00:22:08.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:08.433 "listen_address": { 00:22:08.433 "trtype": "TCP", 00:22:08.433 "adrfam": "IPv4", 00:22:08.433 "traddr": "10.0.0.2", 00:22:08.433 "trsvcid": "4420" 00:22:08.433 }, 00:22:08.433 "peer_address": { 00:22:08.433 "trtype": "TCP", 00:22:08.433 "adrfam": "IPv4", 00:22:08.433 "traddr": 
"10.0.0.1", 00:22:08.433 "trsvcid": "39568" 00:22:08.433 }, 00:22:08.433 "auth": { 00:22:08.433 "state": "completed", 00:22:08.433 "digest": "sha512", 00:22:08.433 "dhgroup": "ffdhe8192" 00:22:08.433 } 00:22:08.433 } 00:22:08.433 ]' 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.433 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.695 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.695 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.695 22:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.695 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:22:08.695 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:22:09.640 22:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:09.640 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:09.640 22:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:09.641 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:09.641 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.641 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:09.641 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.641 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.641 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.641 22:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.906 request: 00:22:09.906 { 00:22:09.906 "name": "nvme0", 00:22:09.906 "trtype": "tcp", 00:22:09.906 "traddr": "10.0.0.2", 00:22:09.906 "adrfam": "ipv4", 00:22:09.906 "trsvcid": "4420", 00:22:09.906 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:09.906 "prchk_reftag": false, 00:22:09.906 "prchk_guard": false, 00:22:09.906 "hdgst": false, 00:22:09.906 "ddgst": false, 00:22:09.906 "dhchap_key": "key3", 00:22:09.906 
"allow_unrecognized_csi": false, 00:22:09.906 "method": "bdev_nvme_attach_controller", 00:22:09.906 "req_id": 1 00:22:09.906 } 00:22:09.906 Got JSON-RPC error response 00:22:09.906 response: 00:22:09.906 { 00:22:09.906 "code": -5, 00:22:09.906 "message": "Input/output error" 00:22:09.906 } 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:09.906 22:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.906 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.168 request: 00:22:10.168 { 00:22:10.168 "name": "nvme0", 00:22:10.168 "trtype": "tcp", 00:22:10.168 "traddr": "10.0.0.2", 00:22:10.168 "adrfam": "ipv4", 00:22:10.168 "trsvcid": "4420", 00:22:10.168 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:10.168 "prchk_reftag": false, 00:22:10.168 "prchk_guard": false, 00:22:10.168 "hdgst": false, 00:22:10.168 "ddgst": false, 00:22:10.168 "dhchap_key": "key3", 00:22:10.168 "allow_unrecognized_csi": false, 00:22:10.168 "method": "bdev_nvme_attach_controller", 00:22:10.168 "req_id": 1 00:22:10.168 } 00:22:10.168 Got JSON-RPC error response 00:22:10.168 response: 00:22:10.168 { 00:22:10.168 "code": -5, 00:22:10.168 "message": "Input/output error" 00:22:10.168 } 00:22:10.168 
22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:10.168 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:10.168 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:10.168 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:10.168 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:10.168 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:10.168 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:10.168 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.168 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.168 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.429 22:10:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:10.690 request: 00:22:10.690 { 00:22:10.690 "name": "nvme0", 00:22:10.690 "trtype": "tcp", 00:22:10.690 "traddr": "10.0.0.2", 00:22:10.690 "adrfam": "ipv4", 00:22:10.690 "trsvcid": "4420", 00:22:10.690 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:10.690 "prchk_reftag": false, 00:22:10.690 "prchk_guard": false, 00:22:10.690 "hdgst": false, 00:22:10.690 "ddgst": false, 00:22:10.690 "dhchap_key": "key0", 00:22:10.690 "dhchap_ctrlr_key": "key1", 00:22:10.690 "allow_unrecognized_csi": false, 00:22:10.690 "method": "bdev_nvme_attach_controller", 00:22:10.690 "req_id": 1 00:22:10.690 } 00:22:10.690 Got JSON-RPC error response 00:22:10.690 response: 00:22:10.690 { 00:22:10.690 "code": -5, 00:22:10.690 "message": "Input/output error" 00:22:10.690 } 00:22:10.690 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:10.690 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:10.690 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:10.690 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:10.690 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:10.690 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:10.690 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:10.951 nvme0n1 00:22:10.951 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:10.951 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:10.951 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.211 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.211 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.211 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.211 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:11.211 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.211 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:11.211 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.211 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:11.211 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:11.211 22:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:12.154 nvme0n1 00:22:12.154 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:12.154 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:12.154 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.154 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.154 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:12.154 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.154 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.154 
22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.154 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:12.154 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:12.154 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.416 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.416 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:22:12.416 22:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: --dhchap-ctrl-secret DHHC-1:03:OWNmYjkwOTgxNmEzMzQxOTg1MjEyMzg1NTkyNWM1MzdiODBjNGVmYWI3YzQ3MzhjNWJkYTVjOTRkNmM0YWQ4Mrvik9M=: 00:22:12.987 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:12.987 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:12.987 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:12.987 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:12.987 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:12.987 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:12.987 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:12.987 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.987 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.248 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:13.248 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:13.248 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:13.248 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:13.248 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.248 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:13.248 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.248 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:13.248 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:13.248 22:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:13.821 request: 00:22:13.821 { 00:22:13.821 "name": "nvme0", 00:22:13.821 "trtype": "tcp", 00:22:13.821 "traddr": "10.0.0.2", 00:22:13.821 "adrfam": "ipv4", 00:22:13.821 "trsvcid": "4420", 00:22:13.821 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:13.821 "prchk_reftag": false, 00:22:13.821 "prchk_guard": false, 00:22:13.821 "hdgst": false, 00:22:13.821 "ddgst": false, 00:22:13.821 "dhchap_key": "key1", 00:22:13.821 "allow_unrecognized_csi": false, 00:22:13.821 "method": "bdev_nvme_attach_controller", 00:22:13.821 "req_id": 1 00:22:13.821 } 00:22:13.821 Got JSON-RPC error response 00:22:13.821 response: 00:22:13.821 { 00:22:13.821 "code": -5, 00:22:13.821 "message": "Input/output error" 00:22:13.821 } 00:22:13.821 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:13.821 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:13.821 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:13.821 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:13.821 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:13.821 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:13.821 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:14.394 nvme0n1 00:22:14.394 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:14.394 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:14.394 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.654 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.654 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.654 22:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.915 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.915 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.915 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:14.915 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.915 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:14.915 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:14.915 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:14.915 nvme0n1 00:22:15.176 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:15.176 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:15.176 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.176 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.176 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.176 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: '' 2s 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: ]] 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjJmYjBlZDVkZWQ1MmZlZThkZTg3NmE2NDI5MjE2MjY/QQcX: 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:15.437 22:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:17.350 
22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:17.350 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:17.350 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:17.350 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:17.350 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:17.350 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: 2s 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:17.611 22:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: ]] 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZWZiNmExODllMWNiNjkyODljNzAyMDQzZTQ3YmQ5MGIxYTZkYWE5YTYxYTM0Zjg0A3jACg==: 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:17.611 22:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:19.522 22:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:20.463 nvme0n1 00:22:20.463 22:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.463 22:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.463 22:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.463 22:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.463 22:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.463 22:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:20.723 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:20.723 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:20.723 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.985 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.985 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.985 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.985 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.985 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.985 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:20.985 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.246 22:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:21.816 request: 00:22:21.817 { 00:22:21.817 "name": "nvme0", 00:22:21.817 "dhchap_key": "key1", 00:22:21.817 "dhchap_ctrlr_key": "key3", 00:22:21.817 "method": "bdev_nvme_set_keys", 00:22:21.817 "req_id": 1 00:22:21.817 } 00:22:21.817 Got JSON-RPC error response 00:22:21.817 response: 00:22:21.817 { 00:22:21.817 "code": -13, 00:22:21.817 "message": "Permission denied" 00:22:21.817 } 00:22:21.817 22:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:21.817 22:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.817 22:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.817 22:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.817 22:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:21.817 22:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:21.817 22:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.077 22:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:22.077 22:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:23.019 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:23.019 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:23.019 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.019 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:23.019 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:23.019 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.019 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.280 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.280 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.280 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.280 22:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:23.850 nvme0n1 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.850 22:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:23.850 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:24.421 request: 00:22:24.421 { 00:22:24.421 "name": "nvme0", 00:22:24.421 "dhchap_key": "key2", 00:22:24.421 "dhchap_ctrlr_key": "key0", 00:22:24.421 "method": "bdev_nvme_set_keys", 00:22:24.421 "req_id": 1 00:22:24.421 } 00:22:24.421 Got JSON-RPC error response 00:22:24.421 response: 00:22:24.421 { 00:22:24.421 "code": -13, 00:22:24.421 "message": "Permission denied" 00:22:24.421 } 00:22:24.421 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:24.421 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.421 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.421 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.421 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:24.421 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:24.421 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.681 22:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:24.682 22:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:25.623 22:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:25.623 22:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:25.623 22:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.623 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:25.623 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:25.623 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:25.623 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3505684 00:22:25.623 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3505684 ']' 00:22:25.623 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3505684 00:22:25.623 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3505684 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 3505684' 00:22:25.884 killing process with pid 3505684 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3505684 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3505684 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:25.884 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:26.145 rmmod nvme_tcp 00:22:26.145 rmmod nvme_fabrics 00:22:26.145 rmmod nvme_keyring 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 3531752 ']' 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 3531752 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3531752 ']' 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3531752 
00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3531752 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3531752' 00:22:26.145 killing process with pid 3531752 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3531752 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3531752 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:26.145 22:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.145 22:10:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.705 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.fUM /tmp/spdk.key-sha256.nvB /tmp/spdk.key-sha384.9nm /tmp/spdk.key-sha512.a3T /tmp/spdk.key-sha512.EpH /tmp/spdk.key-sha384.FJt /tmp/spdk.key-sha256.Ioh '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:28.706 00:22:28.706 real 2m36.535s 00:22:28.706 user 5m52.129s 00:22:28.706 sys 0m24.757s 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.706 ************************************ 00:22:28.706 END TEST nvmf_auth_target 00:22:28.706 ************************************ 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- 
# xtrace_disable 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:28.706 ************************************ 00:22:28.706 START TEST nvmf_bdevio_no_huge 00:22:28.706 ************************************ 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:28.706 * Looking for test storage... 00:22:28.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:28.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.706 --rc genhtml_branch_coverage=1 00:22:28.706 --rc genhtml_function_coverage=1 00:22:28.706 --rc genhtml_legend=1 00:22:28.706 --rc geninfo_all_blocks=1 00:22:28.706 --rc geninfo_unexecuted_blocks=1 00:22:28.706 00:22:28.706 ' 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:28.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.706 --rc genhtml_branch_coverage=1 00:22:28.706 --rc genhtml_function_coverage=1 00:22:28.706 --rc genhtml_legend=1 00:22:28.706 --rc geninfo_all_blocks=1 00:22:28.706 --rc geninfo_unexecuted_blocks=1 00:22:28.706 00:22:28.706 ' 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:28.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.706 --rc genhtml_branch_coverage=1 00:22:28.706 --rc genhtml_function_coverage=1 00:22:28.706 --rc genhtml_legend=1 00:22:28.706 --rc geninfo_all_blocks=1 00:22:28.706 --rc geninfo_unexecuted_blocks=1 00:22:28.706 00:22:28.706 ' 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:28.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.706 --rc genhtml_branch_coverage=1 
00:22:28.706 --rc genhtml_function_coverage=1 00:22:28.706 --rc genhtml_legend=1 00:22:28.706 --rc geninfo_all_blocks=1 00:22:28.706 --rc geninfo_unexecuted_blocks=1 00:22:28.706 00:22:28.706 ' 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.706 22:10:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.706 22:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.706 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:28.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:28.707 22:10:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:36.852 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:36.852 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:36.852 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:22:36.852 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.852 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:36.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:36.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:22:36.853 00:22:36.853 --- 10.0.0.2 ping statistics --- 00:22:36.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.853 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:36.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:36.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:22:36.853 00:22:36.853 --- 10.0.0.1 ping statistics --- 00:22:36.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.853 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:36.853 22:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=3539910 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 3539910 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3539910 ']' 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:36.853 22:10:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:36.853 [2024-10-12 22:10:54.641836] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:36.853 [2024-10-12 22:10:54.641903] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:36.853 [2024-10-12 22:10:54.734259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.853 [2024-10-12 22:10:54.815252] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.853 [2024-10-12 22:10:54.815311] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.853 [2024-10-12 22:10:54.815320] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.853 [2024-10-12 22:10:54.815327] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.853 [2024-10-12 22:10:54.815334] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:36.853 [2024-10-12 22:10:54.815525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:36.853 [2024-10-12 22:10:54.815684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:22:36.853 [2024-10-12 22:10:54.815847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.853 [2024-10-12 22:10:54.815847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.114 [2024-10-12 22:10:55.518144] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:37.114 22:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.114 Malloc0 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.114 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.115 [2024-10-12 22:10:55.572313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.115 22:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:37.115 { 00:22:37.115 "params": { 00:22:37.115 "name": "Nvme$subsystem", 00:22:37.115 "trtype": "$TEST_TRANSPORT", 00:22:37.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.115 "adrfam": "ipv4", 00:22:37.115 "trsvcid": "$NVMF_PORT", 00:22:37.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.115 "hdgst": ${hdgst:-false}, 00:22:37.115 "ddgst": ${ddgst:-false} 00:22:37.115 }, 00:22:37.115 "method": "bdev_nvme_attach_controller" 00:22:37.115 } 00:22:37.115 EOF 00:22:37.115 )") 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:22:37.115 22:10:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:37.115 "params": { 00:22:37.115 "name": "Nvme1", 00:22:37.115 "trtype": "tcp", 00:22:37.115 "traddr": "10.0.0.2", 00:22:37.115 "adrfam": "ipv4", 00:22:37.115 "trsvcid": "4420", 00:22:37.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.115 "hdgst": false, 00:22:37.115 "ddgst": false 00:22:37.115 }, 00:22:37.115 "method": "bdev_nvme_attach_controller" 00:22:37.115 }' 00:22:37.375 [2024-10-12 22:10:55.630904] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:37.376 [2024-10-12 22:10:55.630972] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3540223 ] 00:22:37.376 [2024-10-12 22:10:55.712843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:37.376 [2024-10-12 22:10:55.791900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.376 [2024-10-12 22:10:55.792066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.376 [2024-10-12 22:10:55.792067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.636 I/O targets: 00:22:37.636 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:37.636 00:22:37.636 00:22:37.636 CUnit - A unit testing framework for C - Version 2.1-3 00:22:37.636 http://cunit.sourceforge.net/ 00:22:37.636 00:22:37.636 00:22:37.636 Suite: bdevio tests on: Nvme1n1 00:22:37.897 Test: blockdev write read block ...passed 00:22:37.897 Test: blockdev write zeroes read block ...passed 00:22:37.897 Test: blockdev write zeroes read no split ...passed 00:22:37.897 Test: blockdev write zeroes 
read split ...passed 00:22:37.897 Test: blockdev write zeroes read split partial ...passed 00:22:37.897 Test: blockdev reset ...[2024-10-12 22:10:56.318844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:37.897 [2024-10-12 22:10:56.318944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce15f0 (9): Bad file descriptor 00:22:37.897 [2024-10-12 22:10:56.372116] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:37.897 passed 00:22:37.897 Test: blockdev write read 8 blocks ...passed 00:22:37.897 Test: blockdev write read size > 128k ...passed 00:22:37.897 Test: blockdev write read invalid size ...passed 00:22:38.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:38.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:38.159 Test: blockdev write read max offset ...passed 00:22:38.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:38.159 Test: blockdev writev readv 8 blocks ...passed 00:22:38.159 Test: blockdev writev readv 30 x 1block ...passed 00:22:38.159 Test: blockdev writev readv block ...passed 00:22:38.159 Test: blockdev writev readv size > 128k ...passed 00:22:38.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:38.159 Test: blockdev comparev and writev ...[2024-10-12 22:10:56.552168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.159 [2024-10-12 22:10:56.552225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:38.159 [2024-10-12 22:10:56.552242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.159 [2024-10-12 22:10:56.552251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:38.159 [2024-10-12 22:10:56.552665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.159 [2024-10-12 22:10:56.552679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:38.159 [2024-10-12 22:10:56.552693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.159 [2024-10-12 22:10:56.552701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:38.159 [2024-10-12 22:10:56.553109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.159 [2024-10-12 22:10:56.553123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:38.159 [2024-10-12 22:10:56.553137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.159 [2024-10-12 22:10:56.553145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:38.159 [2024-10-12 22:10:56.553551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:38.159 [2024-10-12 22:10:56.553564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:38.159 [2024-10-12 22:10:56.553578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:22:38.159 [2024-10-12 22:10:56.553587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:38.159 passed 00:22:38.159 Test: blockdev nvme passthru rw ...passed 00:22:38.159 Test: blockdev nvme passthru vendor specific ...[2024-10-12 22:10:56.636534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.159 [2024-10-12 22:10:56.636552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:38.159 [2024-10-12 22:10:56.636801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.159 [2024-10-12 22:10:56.636813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:38.159 [2024-10-12 22:10:56.637020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.159 [2024-10-12 22:10:56.637031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:38.159 [2024-10-12 22:10:56.637257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:38.159 [2024-10-12 22:10:56.637267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:38.159 passed 00:22:38.420 Test: blockdev nvme admin passthru ...passed 00:22:38.420 Test: blockdev copy ...passed 00:22:38.420 00:22:38.420 Run Summary: Type Total Ran Passed Failed Inactive 00:22:38.420 suites 1 1 n/a 0 0 00:22:38.420 tests 23 23 23 0 0 00:22:38.420 asserts 152 152 152 0 n/a 00:22:38.420 00:22:38.420 Elapsed time = 1.206 seconds 00:22:38.680 22:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.680 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.680 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:38.680 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.681 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:38.681 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:38.681 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:38.681 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:38.681 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.681 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:38.681 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.681 22:10:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.681 rmmod nvme_tcp 00:22:38.681 rmmod nvme_fabrics 00:22:38.681 rmmod nvme_keyring 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 3539910 ']' 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@514 -- # killprocess 3539910 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3539910 ']' 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3539910 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3539910 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3539910' 00:22:38.681 killing process with pid 3539910 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3539910 00:22:38.681 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3539910 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.941 22:10:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.489 00:22:41.489 real 0m12.625s 00:22:41.489 user 0m14.437s 00:22:41.489 sys 0m6.796s 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.489 ************************************ 00:22:41.489 END TEST nvmf_bdevio_no_huge 00:22:41.489 ************************************ 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:41.489 ************************************ 00:22:41.489 START TEST nvmf_tls 
00:22:41.489 ************************************ 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:41.489 * Looking for test storage... 00:22:41.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.489 22:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 
'LCOV_OPTS= 00:22:41.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.489 --rc genhtml_branch_coverage=1 00:22:41.489 --rc genhtml_function_coverage=1 00:22:41.489 --rc genhtml_legend=1 00:22:41.489 --rc geninfo_all_blocks=1 00:22:41.489 --rc geninfo_unexecuted_blocks=1 00:22:41.489 00:22:41.489 ' 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:41.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.489 --rc genhtml_branch_coverage=1 00:22:41.489 --rc genhtml_function_coverage=1 00:22:41.489 --rc genhtml_legend=1 00:22:41.489 --rc geninfo_all_blocks=1 00:22:41.489 --rc geninfo_unexecuted_blocks=1 00:22:41.489 00:22:41.489 ' 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:41.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.489 --rc genhtml_branch_coverage=1 00:22:41.489 --rc genhtml_function_coverage=1 00:22:41.489 --rc genhtml_legend=1 00:22:41.489 --rc geninfo_all_blocks=1 00:22:41.489 --rc geninfo_unexecuted_blocks=1 00:22:41.489 00:22:41.489 ' 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:41.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.489 --rc genhtml_branch_coverage=1 00:22:41.489 --rc genhtml_function_coverage=1 00:22:41.489 --rc genhtml_legend=1 00:22:41.489 --rc geninfo_all_blocks=1 00:22:41.489 --rc geninfo_unexecuted_blocks=1 00:22:41.489 00:22:41.489 ' 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.489 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:41.490 22:10:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:49.641 Found 0000:4b:00.0 (0x8086 - 0x159b) 
00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:49.641 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.641 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:49.642 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:49.642 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 
== 0 )) 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip 
netns add cvl_0_0_ns_spdk 00:22:49.642 22:11:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:22:49.642 00:22:49.642 --- 10.0.0.2 ping statistics --- 00:22:49.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.642 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:22:49.642 00:22:49.642 --- 10.0.0.1 ping statistics --- 00:22:49.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.642 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3544614 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3544614 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3544614 ']' 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.642 22:11:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.642 [2024-10-12 22:11:07.334923] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:49.642 [2024-10-12 22:11:07.334993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.642 [2024-10-12 22:11:07.420942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.642 [2024-10-12 22:11:07.468246] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.642 [2024-10-12 22:11:07.468296] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:49.642 [2024-10-12 22:11:07.468305] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.642 [2024-10-12 22:11:07.468312] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.642 [2024-10-12 22:11:07.468318] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:49.642 [2024-10-12 22:11:07.468351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.903 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:49.903 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:49.903 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:49.903 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:49.903 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.903 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.903 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:49.903 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:49.903 true 00:22:50.163 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:50.163 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:50.163 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:50.163 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:50.163 
22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:50.424 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:50.424 22:11:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:50.685 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:50.685 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:50.685 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:50.946 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:50.946 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:50.946 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:50.946 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:50.946 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:50.946 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:51.207 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:51.207 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:51.207 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:51.468 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.468 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:51.468 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:51.468 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:51.468 22:11:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:51.729 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.729 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:51.991 22:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.BySZAEN6VA 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.JAM0IBa7AE 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.BySZAEN6VA 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.JAM0IBa7AE 00:22:51.991 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:52.252 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:52.513 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.BySZAEN6VA 00:22:52.513 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BySZAEN6VA 00:22:52.513 22:11:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:52.774 [2024-10-12 22:11:11.011229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.774 22:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:52.774 22:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:53.034 [2024-10-12 22:11:11.380111] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:53.034 [2024-10-12 22:11:11.380307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.034 22:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:53.294 malloc0 00:22:53.294 22:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:53.554 22:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BySZAEN6VA 00:22:53.554 22:11:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.814 22:11:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.BySZAEN6VA 00:23:03.818 Initializing NVMe Controllers 00:23:03.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:03.818 Initialization complete. Launching workers. 
00:23:03.818 ======================================================== 00:23:03.818 Latency(us) 00:23:03.818 Device Information : IOPS MiB/s Average min max 00:23:03.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18846.87 73.62 3395.99 1131.58 5164.17 00:23:03.818 ======================================================== 00:23:03.818 Total : 18846.87 73.62 3395.99 1131.58 5164.17 00:23:03.818 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BySZAEN6VA 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BySZAEN6VA 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3547665 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3547665 /var/tmp/bdevperf.sock 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3547665 ']' 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:03.818 22:11:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.080 [2024-10-12 22:11:22.335917] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:04.080 [2024-10-12 22:11:22.335974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547665 ] 00:23:04.080 [2024-10-12 22:11:22.414647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.080 [2024-10-12 22:11:22.444858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.769 22:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:04.769 22:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:04.770 22:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BySZAEN6VA 00:23:05.036 22:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:23:05.036 [2024-10-12 22:11:23.461836] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.297 TLSTESTn1 00:23:05.297 22:11:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:05.297 Running I/O for 10 seconds... 00:23:07.185 4464.00 IOPS, 17.44 MiB/s [2024-10-12T20:11:27.060Z] 4938.50 IOPS, 19.29 MiB/s [2024-10-12T20:11:28.003Z] 4844.67 IOPS, 18.92 MiB/s [2024-10-12T20:11:28.944Z] 4970.00 IOPS, 19.41 MiB/s [2024-10-12T20:11:29.888Z] 5143.20 IOPS, 20.09 MiB/s [2024-10-12T20:11:30.830Z] 5246.67 IOPS, 20.49 MiB/s [2024-10-12T20:11:31.773Z] 5295.86 IOPS, 20.69 MiB/s [2024-10-12T20:11:32.717Z] 5364.00 IOPS, 20.95 MiB/s [2024-10-12T20:11:34.104Z] 5351.00 IOPS, 20.90 MiB/s [2024-10-12T20:11:34.104Z] 5374.10 IOPS, 20.99 MiB/s 00:23:15.615 Latency(us) 00:23:15.615 [2024-10-12T20:11:34.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.615 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:15.615 Verification LBA range: start 0x0 length 0x2000 00:23:15.615 TLSTESTn1 : 10.01 5379.58 21.01 0.00 0.00 23759.83 6007.47 88255.15 00:23:15.615 [2024-10-12T20:11:34.104Z] =================================================================================================================== 00:23:15.615 [2024-10-12T20:11:34.104Z] Total : 5379.58 21.01 0.00 0.00 23759.83 6007.47 88255.15 00:23:15.615 { 00:23:15.615 "results": [ 00:23:15.615 { 00:23:15.615 "job": "TLSTESTn1", 00:23:15.615 "core_mask": "0x4", 00:23:15.615 "workload": "verify", 00:23:15.615 "status": "finished", 00:23:15.615 "verify_range": { 00:23:15.615 "start": 0, 00:23:15.615 "length": 8192 00:23:15.615 }, 00:23:15.615 "queue_depth": 128, 00:23:15.615 "io_size": 4096, 00:23:15.615 "runtime": 10.013613, 00:23:15.615 "iops": 
5379.576782126491, 00:23:15.615 "mibps": 21.013971805181605, 00:23:15.615 "io_failed": 0, 00:23:15.615 "io_timeout": 0, 00:23:15.615 "avg_latency_us": 23759.834609144404, 00:23:15.615 "min_latency_us": 6007.466666666666, 00:23:15.615 "max_latency_us": 88255.14666666667 00:23:15.615 } 00:23:15.615 ], 00:23:15.615 "core_count": 1 00:23:15.615 } 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3547665 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3547665 ']' 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3547665 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3547665 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3547665' 00:23:15.615 killing process with pid 3547665 00:23:15.615 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3547665 00:23:15.615 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.616 00:23:15.616 Latency(us) 00:23:15.616 [2024-10-12T20:11:34.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.616 [2024-10-12T20:11:34.105Z] 
=================================================================================================================== 00:23:15.616 [2024-10-12T20:11:34.105Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3547665 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JAM0IBa7AE 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JAM0IBa7AE 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JAM0IBa7AE 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JAM0IBa7AE 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3549799 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3549799 /var/tmp/bdevperf.sock 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3549799 ']' 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.616 22:11:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.616 [2024-10-12 22:11:33.955321] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:15.616 [2024-10-12 22:11:33.955378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549799 ] 00:23:15.616 [2024-10-12 22:11:34.030891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.616 [2024-10-12 22:11:34.057820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.560 22:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.560 22:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:16.560 22:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JAM0IBa7AE 00:23:16.560 22:11:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.821 [2024-10-12 22:11:35.074844] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.821 [2024-10-12 22:11:35.080495] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:16.821 [2024-10-12 22:11:35.080952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb3bf0 (107): Transport endpoint is not connected 00:23:16.821 [2024-10-12 22:11:35.081948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb3bf0 (9): Bad file descriptor 00:23:16.821 
[2024-10-12 22:11:35.082950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:16.821 [2024-10-12 22:11:35.082959] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:16.821 [2024-10-12 22:11:35.082965] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:16.821 [2024-10-12 22:11:35.082973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:16.821 request: 00:23:16.821 { 00:23:16.821 "name": "TLSTEST", 00:23:16.821 "trtype": "tcp", 00:23:16.821 "traddr": "10.0.0.2", 00:23:16.821 "adrfam": "ipv4", 00:23:16.821 "trsvcid": "4420", 00:23:16.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.821 "prchk_reftag": false, 00:23:16.821 "prchk_guard": false, 00:23:16.821 "hdgst": false, 00:23:16.821 "ddgst": false, 00:23:16.821 "psk": "key0", 00:23:16.821 "allow_unrecognized_csi": false, 00:23:16.821 "method": "bdev_nvme_attach_controller", 00:23:16.821 "req_id": 1 00:23:16.821 } 00:23:16.821 Got JSON-RPC error response 00:23:16.821 response: 00:23:16.821 { 00:23:16.821 "code": -5, 00:23:16.821 "message": "Input/output error" 00:23:16.821 } 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3549799 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3549799 ']' 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3549799 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3549799 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3549799' 00:23:16.821 killing process with pid 3549799 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3549799 00:23:16.821 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.821 00:23:16.821 Latency(us) 00:23:16.821 [2024-10-12T20:11:35.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.821 [2024-10-12T20:11:35.310Z] =================================================================================================================== 00:23:16.821 [2024-10-12T20:11:35.310Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3549799 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BySZAEN6VA 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BySZAEN6VA 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BySZAEN6VA 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BySZAEN6VA 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3550036 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3550036 /var/tmp/bdevperf.sock 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3550036 ']' 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.821 22:11:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.082 [2024-10-12 22:11:35.336255] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:17.082 [2024-10-12 22:11:35.336307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550036 ] 00:23:17.082 [2024-10-12 22:11:35.413086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.082 [2024-10-12 22:11:35.438918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.656 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.656 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:17.656 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BySZAEN6VA 00:23:17.917 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:18.178 [2024-10-12 22:11:36.464184] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.178 [2024-10-12 22:11:36.473810] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:18.178 [2024-10-12 22:11:36.473830] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:18.178 [2024-10-12 22:11:36.473850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:18.178 [2024-10-12 22:11:36.474552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1681bf0 (107): Transport endpoint is not connected 00:23:18.178 [2024-10-12 22:11:36.475549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1681bf0 (9): Bad file descriptor 00:23:18.178 [2024-10-12 22:11:36.476550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:18.178 [2024-10-12 22:11:36.476560] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:18.178 [2024-10-12 22:11:36.476566] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:18.178 [2024-10-12 22:11:36.476575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:18.178 request: 00:23:18.178 { 00:23:18.178 "name": "TLSTEST", 00:23:18.178 "trtype": "tcp", 00:23:18.178 "traddr": "10.0.0.2", 00:23:18.178 "adrfam": "ipv4", 00:23:18.178 "trsvcid": "4420", 00:23:18.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.178 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:18.178 "prchk_reftag": false, 00:23:18.178 "prchk_guard": false, 00:23:18.178 "hdgst": false, 00:23:18.178 "ddgst": false, 00:23:18.178 "psk": "key0", 00:23:18.178 "allow_unrecognized_csi": false, 00:23:18.178 "method": "bdev_nvme_attach_controller", 00:23:18.178 "req_id": 1 00:23:18.178 } 00:23:18.178 Got JSON-RPC error response 00:23:18.178 response: 00:23:18.178 { 00:23:18.178 "code": -5, 00:23:18.178 "message": "Input/output error" 00:23:18.178 } 00:23:18.178 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3550036 00:23:18.178 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3550036 ']' 00:23:18.178 22:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3550036 00:23:18.178 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:18.178 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.178 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3550036 00:23:18.178 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:18.178 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:18.178 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3550036' 00:23:18.178 killing process with pid 3550036 00:23:18.178 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3550036 00:23:18.178 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.178 00:23:18.178 Latency(us) 00:23:18.178 [2024-10-12T20:11:36.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.178 [2024-10-12T20:11:36.667Z] =================================================================================================================== 00:23:18.178 [2024-10-12T20:11:36.667Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:18.178 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3550036 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:18.439 22:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BySZAEN6VA 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BySZAEN6VA 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BySZAEN6VA 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BySZAEN6VA 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3550377 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3550377 /var/tmp/bdevperf.sock 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3550377 ']' 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.439 22:11:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.439 [2024-10-12 22:11:36.727148] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:18.440 [2024-10-12 22:11:36.727202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550377 ] 00:23:18.440 [2024-10-12 22:11:36.804865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.440 [2024-10-12 22:11:36.830847] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.384 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.384 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:19.384 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BySZAEN6VA 00:23:19.384 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.384 [2024-10-12 22:11:37.863953] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.384 [2024-10-12 22:11:37.868395] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:19.384 [2024-10-12 22:11:37.868413] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:19.384 [2024-10-12 22:11:37.868432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:19.384 [2024-10-12 22:11:37.869078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b60bf0 (107): Transport endpoint is not connected 00:23:19.384 [2024-10-12 22:11:37.870073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b60bf0 (9): Bad file descriptor 00:23:19.384 [2024-10-12 22:11:37.871075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:19.384 [2024-10-12 22:11:37.871086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:19.384 [2024-10-12 22:11:37.871092] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:19.384 [2024-10-12 22:11:37.871100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:19.647 request: 00:23:19.647 { 00:23:19.647 "name": "TLSTEST", 00:23:19.647 "trtype": "tcp", 00:23:19.647 "traddr": "10.0.0.2", 00:23:19.647 "adrfam": "ipv4", 00:23:19.647 "trsvcid": "4420", 00:23:19.647 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:19.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.647 "prchk_reftag": false, 00:23:19.647 "prchk_guard": false, 00:23:19.647 "hdgst": false, 00:23:19.647 "ddgst": false, 00:23:19.647 "psk": "key0", 00:23:19.647 "allow_unrecognized_csi": false, 00:23:19.647 "method": "bdev_nvme_attach_controller", 00:23:19.647 "req_id": 1 00:23:19.647 } 00:23:19.647 Got JSON-RPC error response 00:23:19.647 response: 00:23:19.647 { 00:23:19.647 "code": -5, 00:23:19.647 "message": "Input/output error" 00:23:19.647 } 00:23:19.647 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3550377 00:23:19.647 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3550377 ']' 00:23:19.648 22:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3550377 00:23:19.648 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:19.648 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.648 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3550377 00:23:19.648 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:19.648 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:19.648 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3550377' 00:23:19.648 killing process with pid 3550377 00:23:19.648 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3550377 00:23:19.648 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.648 00:23:19.648 Latency(us) 00:23:19.648 [2024-10-12T20:11:38.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.648 [2024-10-12T20:11:38.137Z] =================================================================================================================== 00:23:19.648 [2024-10-12T20:11:38.137Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.648 22:11:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3550377 00:23:19.648 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:19.648 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:19.648 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.648 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.648 22:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.648 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:19.648 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:19.648 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3550718 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.649 22:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3550718 /var/tmp/bdevperf.sock 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3550718 ']' 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.649 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.649 [2024-10-12 22:11:38.120953] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:19.649 [2024-10-12 22:11:38.121007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550718 ] 00:23:19.913 [2024-10-12 22:11:38.197817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.913 [2024-10-12 22:11:38.225110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.913 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.913 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:19.913 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:20.173 [2024-10-12 22:11:38.448060] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:20.173 [2024-10-12 22:11:38.448088] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:20.173 request: 00:23:20.173 { 00:23:20.173 "name": "key0", 00:23:20.173 "path": "", 00:23:20.173 "method": "keyring_file_add_key", 00:23:20.173 "req_id": 1 00:23:20.173 } 00:23:20.173 Got JSON-RPC error response 00:23:20.173 response: 00:23:20.173 { 00:23:20.173 "code": -1, 00:23:20.173 "message": "Operation not permitted" 00:23:20.173 } 00:23:20.173 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.173 [2024-10-12 22:11:38.632598] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:20.173 [2024-10-12 22:11:38.632620] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:20.173 request: 00:23:20.173 { 00:23:20.173 "name": "TLSTEST", 00:23:20.173 "trtype": "tcp", 00:23:20.173 "traddr": "10.0.0.2", 00:23:20.173 "adrfam": "ipv4", 00:23:20.173 "trsvcid": "4420", 00:23:20.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.173 "prchk_reftag": false, 00:23:20.173 "prchk_guard": false, 00:23:20.173 "hdgst": false, 00:23:20.173 "ddgst": false, 00:23:20.173 "psk": "key0", 00:23:20.173 "allow_unrecognized_csi": false, 00:23:20.173 "method": "bdev_nvme_attach_controller", 00:23:20.173 "req_id": 1 00:23:20.173 } 00:23:20.173 Got JSON-RPC error response 00:23:20.173 response: 00:23:20.173 { 00:23:20.173 "code": -126, 00:23:20.173 "message": "Required key not available" 00:23:20.173 } 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3550718 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3550718 ']' 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3550718 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3550718 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3550718' 00:23:20.434 killing process with pid 3550718 
00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3550718 00:23:20.434 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.434 00:23:20.434 Latency(us) 00:23:20.434 [2024-10-12T20:11:38.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.434 [2024-10-12T20:11:38.923Z] =================================================================================================================== 00:23:20.434 [2024-10-12T20:11:38.923Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3550718 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3544614 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3544614 ']' 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3544614 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3544614 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# process_name=reactor_1 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3544614' 00:23:20.434 killing process with pid 3544614 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3544614 00:23:20.434 22:11:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3544614 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.4wINg4DIn2 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:20.696 22:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.4wINg4DIn2 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3550860 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3550860 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3550860 ']' 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.696 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.696 [2024-10-12 22:11:39.130295] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:20.696 [2024-10-12 22:11:39.130352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.957 [2024-10-12 22:11:39.215891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.957 [2024-10-12 22:11:39.244927] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.957 [2024-10-12 22:11:39.244962] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.957 [2024-10-12 22:11:39.244968] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.957 [2024-10-12 22:11:39.244973] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.957 [2024-10-12 22:11:39.244977] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:20.957 [2024-10-12 22:11:39.244991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.528 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.528 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:21.528 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:21.528 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:21.528 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.528 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.528 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.4wINg4DIn2 00:23:21.528 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4wINg4DIn2 00:23:21.528 22:11:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:21.789 [2024-10-12 22:11:40.115560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.789 22:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:22.049 22:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:22.049 [2024-10-12 22:11:40.476426] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:22.049 [2024-10-12 22:11:40.476603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:22.049 22:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:22.309 malloc0 00:23:22.309 22:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:22.570 22:11:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4wINg4DIn2 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4wINg4DIn2 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4wINg4DIn2 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3551429 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3551429 /var/tmp/bdevperf.sock 
00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3551429 ']' 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.831 22:11:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.831 [2024-10-12 22:11:41.298493] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:22.831 [2024-10-12 22:11:41.298545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3551429 ] 00:23:23.092 [2024-10-12 22:11:41.374000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.092 [2024-10-12 22:11:41.401857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.664 22:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.664 22:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:23.664 22:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4wINg4DIn2 00:23:23.925 22:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.186 [2024-10-12 22:11:42.422901] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.186 TLSTESTn1 00:23:24.186 22:11:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:24.186 Running I/O for 10 seconds... 
00:23:26.513 5679.00 IOPS, 22.18 MiB/s [2024-10-12T20:11:45.943Z] 5753.50 IOPS, 22.47 MiB/s [2024-10-12T20:11:46.885Z] 5821.00 IOPS, 22.74 MiB/s [2024-10-12T20:11:47.828Z] 5746.00 IOPS, 22.45 MiB/s [2024-10-12T20:11:48.769Z] 5682.60 IOPS, 22.20 MiB/s [2024-10-12T20:11:49.712Z] 5616.00 IOPS, 21.94 MiB/s [2024-10-12T20:11:50.654Z] 5666.71 IOPS, 22.14 MiB/s [2024-10-12T20:11:52.039Z] 5702.00 IOPS, 22.27 MiB/s [2024-10-12T20:11:52.981Z] 5736.33 IOPS, 22.41 MiB/s [2024-10-12T20:11:52.981Z] 5633.50 IOPS, 22.01 MiB/s 00:23:34.492 Latency(us) 00:23:34.492 [2024-10-12T20:11:52.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.492 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:34.492 Verification LBA range: start 0x0 length 0x2000 00:23:34.492 TLSTESTn1 : 10.01 5638.51 22.03 0.00 0.00 22671.21 5789.01 60730.03 00:23:34.492 [2024-10-12T20:11:52.981Z] =================================================================================================================== 00:23:34.492 [2024-10-12T20:11:52.981Z] Total : 5638.51 22.03 0.00 0.00 22671.21 5789.01 60730.03 00:23:34.492 { 00:23:34.492 "results": [ 00:23:34.492 { 00:23:34.492 "job": "TLSTESTn1", 00:23:34.492 "core_mask": "0x4", 00:23:34.492 "workload": "verify", 00:23:34.492 "status": "finished", 00:23:34.492 "verify_range": { 00:23:34.492 "start": 0, 00:23:34.492 "length": 8192 00:23:34.492 }, 00:23:34.492 "queue_depth": 128, 00:23:34.492 "io_size": 4096, 00:23:34.492 "runtime": 10.013824, 00:23:34.492 "iops": 5638.505330231488, 00:23:34.492 "mibps": 22.02541144621675, 00:23:34.492 "io_failed": 0, 00:23:34.492 "io_timeout": 0, 00:23:34.492 "avg_latency_us": 22671.21334207062, 00:23:34.492 "min_latency_us": 5789.013333333333, 00:23:34.492 "max_latency_us": 60730.026666666665 00:23:34.492 } 00:23:34.492 ], 00:23:34.492 "core_count": 1 00:23:34.492 } 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3551429 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3551429 ']' 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3551429 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3551429 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3551429' 00:23:34.492 killing process with pid 3551429 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3551429 00:23:34.492 Received shutdown signal, test time was about 10.000000 seconds 00:23:34.492 00:23:34.492 Latency(us) 00:23:34.492 [2024-10-12T20:11:52.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.492 [2024-10-12T20:11:52.981Z] =================================================================================================================== 00:23:34.492 [2024-10-12T20:11:52.981Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3551429 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.4wINg4DIn2 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4wINg4DIn2 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4wINg4DIn2 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4wINg4DIn2 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4wINg4DIn2 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3553512 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3553512 
/var/tmp/bdevperf.sock 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:34.492 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3553512 ']' 00:23:34.493 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:34.493 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.493 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:34.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:34.493 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:34.493 22:11:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.493 [2024-10-12 22:11:52.899983] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:34.493 [2024-10-12 22:11:52.900040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3553512 ] 00:23:34.493 [2024-10-12 22:11:52.978216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.754 [2024-10-12 22:11:53.005629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.326 22:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.326 22:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:35.326 22:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4wINg4DIn2 00:23:35.587 [2024-10-12 22:11:53.850225] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4wINg4DIn2': 0100666 00:23:35.587 [2024-10-12 22:11:53.850252] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:35.587 request: 00:23:35.587 { 00:23:35.587 "name": "key0", 00:23:35.587 "path": "/tmp/tmp.4wINg4DIn2", 00:23:35.587 "method": "keyring_file_add_key", 00:23:35.587 "req_id": 1 00:23:35.587 } 00:23:35.587 Got JSON-RPC error response 00:23:35.587 response: 00:23:35.587 { 00:23:35.587 "code": -1, 00:23:35.587 "message": "Operation not permitted" 00:23:35.587 } 00:23:35.587 22:11:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.587 [2024-10-12 22:11:54.030742] bdev_nvme_rpc.c: 
517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.587 [2024-10-12 22:11:54.030764] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:35.587 request: 00:23:35.587 { 00:23:35.587 "name": "TLSTEST", 00:23:35.587 "trtype": "tcp", 00:23:35.587 "traddr": "10.0.0.2", 00:23:35.587 "adrfam": "ipv4", 00:23:35.587 "trsvcid": "4420", 00:23:35.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.587 "prchk_reftag": false, 00:23:35.587 "prchk_guard": false, 00:23:35.587 "hdgst": false, 00:23:35.587 "ddgst": false, 00:23:35.587 "psk": "key0", 00:23:35.587 "allow_unrecognized_csi": false, 00:23:35.587 "method": "bdev_nvme_attach_controller", 00:23:35.587 "req_id": 1 00:23:35.587 } 00:23:35.587 Got JSON-RPC error response 00:23:35.587 response: 00:23:35.587 { 00:23:35.587 "code": -126, 00:23:35.587 "message": "Required key not available" 00:23:35.587 } 00:23:35.587 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3553512 00:23:35.587 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3553512 ']' 00:23:35.587 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3553512 00:23:35.587 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:35.587 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.587 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3553512 00:23:35.848 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:35.848 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:35.848 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 3553512' 00:23:35.848 killing process with pid 3553512 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3553512 00:23:35.849 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.849 00:23:35.849 Latency(us) 00:23:35.849 [2024-10-12T20:11:54.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.849 [2024-10-12T20:11:54.338Z] =================================================================================================================== 00:23:35.849 [2024-10-12T20:11:54.338Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3553512 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3550860 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3550860 ']' 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3550860 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3550860 00:23:35.849 
22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3550860' 00:23:35.849 killing process with pid 3550860 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3550860 00:23:35.849 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3550860 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3553808 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3553808 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3553808 ']' 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:36.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.109 22:11:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.109 [2024-10-12 22:11:54.467613] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:36.109 [2024-10-12 22:11:54.467666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.110 [2024-10-12 22:11:54.549337] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.110 [2024-10-12 22:11:54.576937] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.110 [2024-10-12 22:11:54.576972] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.110 [2024-10-12 22:11:54.576977] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.110 [2024-10-12 22:11:54.576982] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.110 [2024-10-12 22:11:54.576987] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.110 [2024-10-12 22:11:54.577006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.4wINg4DIn2 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.4wINg4DIn2 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.4wINg4DIn2 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4wINg4DIn2 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.051 [2024-10-12 22:11:55.462539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.051 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.312 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.312 [2024-10-12 22:11:55.783320] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.312 [2024-10-12 22:11:55.783499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.312 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.573 malloc0 00:23:37.573 22:11:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:37.833 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4wINg4DIn2 00:23:37.833 [2024-10-12 22:11:56.299253] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4wINg4DIn2': 0100666 00:23:37.833 [2024-10-12 22:11:56.299278] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:37.833 request: 00:23:37.833 { 00:23:37.833 "name": "key0", 00:23:37.833 "path": "/tmp/tmp.4wINg4DIn2", 00:23:37.833 "method": "keyring_file_add_key", 00:23:37.833 "req_id": 1 
00:23:37.833 } 00:23:37.833 Got JSON-RPC error response 00:23:37.833 response: 00:23:37.833 { 00:23:37.833 "code": -1, 00:23:37.833 "message": "Operation not permitted" 00:23:37.833 } 00:23:37.833 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.094 [2024-10-12 22:11:56.467683] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:38.094 [2024-10-12 22:11:56.467711] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:38.094 request: 00:23:38.094 { 00:23:38.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.094 "host": "nqn.2016-06.io.spdk:host1", 00:23:38.094 "psk": "key0", 00:23:38.094 "method": "nvmf_subsystem_add_host", 00:23:38.094 "req_id": 1 00:23:38.094 } 00:23:38.094 Got JSON-RPC error response 00:23:38.094 response: 00:23:38.094 { 00:23:38.094 "code": -32603, 00:23:38.094 "message": "Internal error" 00:23:38.094 } 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3553808 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3553808 ']' 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3553808 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:38.094 22:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3553808 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3553808' 00:23:38.094 killing process with pid 3553808 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3553808 00:23:38.094 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3553808 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.4wINg4DIn2 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3554394 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3554394 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3554394 ']' 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.355 22:11:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.355 [2024-10-12 22:11:56.734458] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:38.355 [2024-10-12 22:11:56.734517] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.355 [2024-10-12 22:11:56.818770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.615 [2024-10-12 22:11:56.847756] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.615 [2024-10-12 22:11:56.847785] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.615 [2024-10-12 22:11:56.847791] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.615 [2024-10-12 22:11:56.847796] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.615 [2024-10-12 22:11:56.847800] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.615 [2024-10-12 22:11:56.847815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.185 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:39.185 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:39.185 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:39.185 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.185 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.185 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.185 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.4wINg4DIn2 00:23:39.185 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4wINg4DIn2 00:23:39.185 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:39.444 [2024-10-12 22:11:57.725994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.444 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:39.704 22:11:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:39.704 [2024-10-12 22:11:58.094889] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.704 [2024-10-12 22:11:58.095068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:39.704 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:39.965 malloc0 00:23:39.965 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:40.226 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4wINg4DIn2 00:23:40.226 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:40.487 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:40.487 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3554859 00:23:40.487 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.487 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3554859 /var/tmp/bdevperf.sock 00:23:40.487 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3554859 ']' 00:23:40.487 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.487 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:40.487 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:40.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.487 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:40.487 22:11:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.487 [2024-10-12 22:11:58.850523] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:40.487 [2024-10-12 22:11:58.850564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554859 ] 00:23:40.487 [2024-10-12 22:11:58.922022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.487 [2024-10-12 22:11:58.952920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.747 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.747 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:40.747 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4wINg4DIn2 00:23:40.747 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:41.008 [2024-10-12 22:11:59.336345] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.008 TLSTESTn1 00:23:41.008 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:41.270 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:41.270 "subsystems": [ 00:23:41.270 { 00:23:41.270 "subsystem": "keyring", 00:23:41.270 "config": [ 00:23:41.270 { 00:23:41.270 "method": "keyring_file_add_key", 00:23:41.270 "params": { 00:23:41.270 "name": "key0", 00:23:41.270 "path": "/tmp/tmp.4wINg4DIn2" 00:23:41.270 } 00:23:41.270 } 00:23:41.270 ] 00:23:41.270 }, 00:23:41.270 { 00:23:41.270 "subsystem": "iobuf", 00:23:41.270 "config": [ 00:23:41.270 { 00:23:41.270 "method": "iobuf_set_options", 00:23:41.270 "params": { 00:23:41.270 "small_pool_count": 8192, 00:23:41.270 "large_pool_count": 1024, 00:23:41.270 "small_bufsize": 8192, 00:23:41.270 "large_bufsize": 135168 00:23:41.270 } 00:23:41.270 } 00:23:41.270 ] 00:23:41.270 }, 00:23:41.270 { 00:23:41.270 "subsystem": "sock", 00:23:41.270 "config": [ 00:23:41.270 { 00:23:41.270 "method": "sock_set_default_impl", 00:23:41.270 "params": { 00:23:41.270 "impl_name": "posix" 00:23:41.270 } 00:23:41.270 }, 00:23:41.270 { 00:23:41.270 "method": "sock_impl_set_options", 00:23:41.270 "params": { 00:23:41.270 "impl_name": "ssl", 00:23:41.270 "recv_buf_size": 4096, 00:23:41.270 "send_buf_size": 4096, 00:23:41.270 "enable_recv_pipe": true, 00:23:41.270 "enable_quickack": false, 00:23:41.270 "enable_placement_id": 0, 00:23:41.270 "enable_zerocopy_send_server": true, 00:23:41.270 "enable_zerocopy_send_client": false, 00:23:41.270 "zerocopy_threshold": 0, 00:23:41.270 "tls_version": 0, 00:23:41.270 "enable_ktls": false 00:23:41.270 } 00:23:41.270 }, 00:23:41.270 { 00:23:41.270 "method": "sock_impl_set_options", 00:23:41.270 "params": { 00:23:41.270 "impl_name": "posix", 00:23:41.270 "recv_buf_size": 2097152, 00:23:41.271 "send_buf_size": 2097152, 00:23:41.271 "enable_recv_pipe": true, 00:23:41.271 "enable_quickack": false, 00:23:41.271 "enable_placement_id": 0, 00:23:41.271 
"enable_zerocopy_send_server": true, 00:23:41.271 "enable_zerocopy_send_client": false, 00:23:41.271 "zerocopy_threshold": 0, 00:23:41.271 "tls_version": 0, 00:23:41.271 "enable_ktls": false 00:23:41.271 } 00:23:41.271 } 00:23:41.271 ] 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "subsystem": "vmd", 00:23:41.271 "config": [] 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "subsystem": "accel", 00:23:41.271 "config": [ 00:23:41.271 { 00:23:41.271 "method": "accel_set_options", 00:23:41.271 "params": { 00:23:41.271 "small_cache_size": 128, 00:23:41.271 "large_cache_size": 16, 00:23:41.271 "task_count": 2048, 00:23:41.271 "sequence_count": 2048, 00:23:41.271 "buf_count": 2048 00:23:41.271 } 00:23:41.271 } 00:23:41.271 ] 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "subsystem": "bdev", 00:23:41.271 "config": [ 00:23:41.271 { 00:23:41.271 "method": "bdev_set_options", 00:23:41.271 "params": { 00:23:41.271 "bdev_io_pool_size": 65535, 00:23:41.271 "bdev_io_cache_size": 256, 00:23:41.271 "bdev_auto_examine": true, 00:23:41.271 "iobuf_small_cache_size": 128, 00:23:41.271 "iobuf_large_cache_size": 16 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "bdev_raid_set_options", 00:23:41.271 "params": { 00:23:41.271 "process_window_size_kb": 1024, 00:23:41.271 "process_max_bandwidth_mb_sec": 0 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "bdev_iscsi_set_options", 00:23:41.271 "params": { 00:23:41.271 "timeout_sec": 30 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "bdev_nvme_set_options", 00:23:41.271 "params": { 00:23:41.271 "action_on_timeout": "none", 00:23:41.271 "timeout_us": 0, 00:23:41.271 "timeout_admin_us": 0, 00:23:41.271 "keep_alive_timeout_ms": 10000, 00:23:41.271 "arbitration_burst": 0, 00:23:41.271 "low_priority_weight": 0, 00:23:41.271 "medium_priority_weight": 0, 00:23:41.271 "high_priority_weight": 0, 00:23:41.271 "nvme_adminq_poll_period_us": 10000, 00:23:41.271 "nvme_ioq_poll_period_us": 0, 00:23:41.271 
"io_queue_requests": 0, 00:23:41.271 "delay_cmd_submit": true, 00:23:41.271 "transport_retry_count": 4, 00:23:41.271 "bdev_retry_count": 3, 00:23:41.271 "transport_ack_timeout": 0, 00:23:41.271 "ctrlr_loss_timeout_sec": 0, 00:23:41.271 "reconnect_delay_sec": 0, 00:23:41.271 "fast_io_fail_timeout_sec": 0, 00:23:41.271 "disable_auto_failback": false, 00:23:41.271 "generate_uuids": false, 00:23:41.271 "transport_tos": 0, 00:23:41.271 "nvme_error_stat": false, 00:23:41.271 "rdma_srq_size": 0, 00:23:41.271 "io_path_stat": false, 00:23:41.271 "allow_accel_sequence": false, 00:23:41.271 "rdma_max_cq_size": 0, 00:23:41.271 "rdma_cm_event_timeout_ms": 0, 00:23:41.271 "dhchap_digests": [ 00:23:41.271 "sha256", 00:23:41.271 "sha384", 00:23:41.271 "sha512" 00:23:41.271 ], 00:23:41.271 "dhchap_dhgroups": [ 00:23:41.271 "null", 00:23:41.271 "ffdhe2048", 00:23:41.271 "ffdhe3072", 00:23:41.271 "ffdhe4096", 00:23:41.271 "ffdhe6144", 00:23:41.271 "ffdhe8192" 00:23:41.271 ] 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "bdev_nvme_set_hotplug", 00:23:41.271 "params": { 00:23:41.271 "period_us": 100000, 00:23:41.271 "enable": false 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "bdev_malloc_create", 00:23:41.271 "params": { 00:23:41.271 "name": "malloc0", 00:23:41.271 "num_blocks": 8192, 00:23:41.271 "block_size": 4096, 00:23:41.271 "physical_block_size": 4096, 00:23:41.271 "uuid": "91a2fdfc-93aa-4f80-b61f-9a8acf588454", 00:23:41.271 "optimal_io_boundary": 0, 00:23:41.271 "md_size": 0, 00:23:41.271 "dif_type": 0, 00:23:41.271 "dif_is_head_of_md": false, 00:23:41.271 "dif_pi_format": 0 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "bdev_wait_for_examine" 00:23:41.271 } 00:23:41.271 ] 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "subsystem": "nbd", 00:23:41.271 "config": [] 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "subsystem": "scheduler", 00:23:41.271 "config": [ 00:23:41.271 { 00:23:41.271 "method": 
"framework_set_scheduler", 00:23:41.271 "params": { 00:23:41.271 "name": "static" 00:23:41.271 } 00:23:41.271 } 00:23:41.271 ] 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "subsystem": "nvmf", 00:23:41.271 "config": [ 00:23:41.271 { 00:23:41.271 "method": "nvmf_set_config", 00:23:41.271 "params": { 00:23:41.271 "discovery_filter": "match_any", 00:23:41.271 "admin_cmd_passthru": { 00:23:41.271 "identify_ctrlr": false 00:23:41.271 }, 00:23:41.271 "dhchap_digests": [ 00:23:41.271 "sha256", 00:23:41.271 "sha384", 00:23:41.271 "sha512" 00:23:41.271 ], 00:23:41.271 "dhchap_dhgroups": [ 00:23:41.271 "null", 00:23:41.271 "ffdhe2048", 00:23:41.271 "ffdhe3072", 00:23:41.271 "ffdhe4096", 00:23:41.271 "ffdhe6144", 00:23:41.271 "ffdhe8192" 00:23:41.271 ] 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "nvmf_set_max_subsystems", 00:23:41.271 "params": { 00:23:41.271 "max_subsystems": 1024 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "nvmf_set_crdt", 00:23:41.271 "params": { 00:23:41.271 "crdt1": 0, 00:23:41.271 "crdt2": 0, 00:23:41.271 "crdt3": 0 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "nvmf_create_transport", 00:23:41.271 "params": { 00:23:41.271 "trtype": "TCP", 00:23:41.271 "max_queue_depth": 128, 00:23:41.271 "max_io_qpairs_per_ctrlr": 127, 00:23:41.271 "in_capsule_data_size": 4096, 00:23:41.271 "max_io_size": 131072, 00:23:41.271 "io_unit_size": 131072, 00:23:41.271 "max_aq_depth": 128, 00:23:41.271 "num_shared_buffers": 511, 00:23:41.271 "buf_cache_size": 4294967295, 00:23:41.271 "dif_insert_or_strip": false, 00:23:41.271 "zcopy": false, 00:23:41.271 "c2h_success": false, 00:23:41.271 "sock_priority": 0, 00:23:41.271 "abort_timeout_sec": 1, 00:23:41.271 "ack_timeout": 0, 00:23:41.271 "data_wr_pool_size": 0 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "nvmf_create_subsystem", 00:23:41.271 "params": { 00:23:41.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.271 
"allow_any_host": false, 00:23:41.271 "serial_number": "SPDK00000000000001", 00:23:41.271 "model_number": "SPDK bdev Controller", 00:23:41.271 "max_namespaces": 10, 00:23:41.271 "min_cntlid": 1, 00:23:41.271 "max_cntlid": 65519, 00:23:41.271 "ana_reporting": false 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "nvmf_subsystem_add_host", 00:23:41.271 "params": { 00:23:41.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.271 "host": "nqn.2016-06.io.spdk:host1", 00:23:41.271 "psk": "key0" 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "nvmf_subsystem_add_ns", 00:23:41.271 "params": { 00:23:41.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.271 "namespace": { 00:23:41.271 "nsid": 1, 00:23:41.271 "bdev_name": "malloc0", 00:23:41.271 "nguid": "91A2FDFC93AA4F80B61F9A8ACF588454", 00:23:41.271 "uuid": "91a2fdfc-93aa-4f80-b61f-9a8acf588454", 00:23:41.271 "no_auto_visible": false 00:23:41.271 } 00:23:41.271 } 00:23:41.271 }, 00:23:41.271 { 00:23:41.271 "method": "nvmf_subsystem_add_listener", 00:23:41.271 "params": { 00:23:41.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.271 "listen_address": { 00:23:41.271 "trtype": "TCP", 00:23:41.271 "adrfam": "IPv4", 00:23:41.271 "traddr": "10.0.0.2", 00:23:41.271 "trsvcid": "4420" 00:23:41.271 }, 00:23:41.271 "secure_channel": true 00:23:41.271 } 00:23:41.271 } 00:23:41.271 ] 00:23:41.271 } 00:23:41.271 ] 00:23:41.271 }' 00:23:41.271 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:41.533 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:41.533 "subsystems": [ 00:23:41.533 { 00:23:41.533 "subsystem": "keyring", 00:23:41.533 "config": [ 00:23:41.533 { 00:23:41.533 "method": "keyring_file_add_key", 00:23:41.533 "params": { 00:23:41.533 "name": "key0", 00:23:41.533 "path": "/tmp/tmp.4wINg4DIn2" 00:23:41.533 } 
00:23:41.533 } 00:23:41.533 ] 00:23:41.533 }, 00:23:41.533 { 00:23:41.533 "subsystem": "iobuf", 00:23:41.533 "config": [ 00:23:41.533 { 00:23:41.533 "method": "iobuf_set_options", 00:23:41.533 "params": { 00:23:41.533 "small_pool_count": 8192, 00:23:41.533 "large_pool_count": 1024, 00:23:41.533 "small_bufsize": 8192, 00:23:41.533 "large_bufsize": 135168 00:23:41.533 } 00:23:41.533 } 00:23:41.533 ] 00:23:41.533 }, 00:23:41.533 { 00:23:41.533 "subsystem": "sock", 00:23:41.533 "config": [ 00:23:41.533 { 00:23:41.533 "method": "sock_set_default_impl", 00:23:41.533 "params": { 00:23:41.533 "impl_name": "posix" 00:23:41.533 } 00:23:41.533 }, 00:23:41.533 { 00:23:41.533 "method": "sock_impl_set_options", 00:23:41.533 "params": { 00:23:41.533 "impl_name": "ssl", 00:23:41.533 "recv_buf_size": 4096, 00:23:41.533 "send_buf_size": 4096, 00:23:41.533 "enable_recv_pipe": true, 00:23:41.533 "enable_quickack": false, 00:23:41.533 "enable_placement_id": 0, 00:23:41.533 "enable_zerocopy_send_server": true, 00:23:41.533 "enable_zerocopy_send_client": false, 00:23:41.533 "zerocopy_threshold": 0, 00:23:41.533 "tls_version": 0, 00:23:41.533 "enable_ktls": false 00:23:41.533 } 00:23:41.533 }, 00:23:41.533 { 00:23:41.533 "method": "sock_impl_set_options", 00:23:41.533 "params": { 00:23:41.533 "impl_name": "posix", 00:23:41.533 "recv_buf_size": 2097152, 00:23:41.533 "send_buf_size": 2097152, 00:23:41.533 "enable_recv_pipe": true, 00:23:41.533 "enable_quickack": false, 00:23:41.533 "enable_placement_id": 0, 00:23:41.533 "enable_zerocopy_send_server": true, 00:23:41.533 "enable_zerocopy_send_client": false, 00:23:41.533 "zerocopy_threshold": 0, 00:23:41.533 "tls_version": 0, 00:23:41.533 "enable_ktls": false 00:23:41.533 } 00:23:41.533 } 00:23:41.533 ] 00:23:41.533 }, 00:23:41.533 { 00:23:41.533 "subsystem": "vmd", 00:23:41.533 "config": [] 00:23:41.533 }, 00:23:41.533 { 00:23:41.533 "subsystem": "accel", 00:23:41.533 "config": [ 00:23:41.533 { 00:23:41.533 "method": "accel_set_options", 
00:23:41.533 "params": { 00:23:41.533 "small_cache_size": 128, 00:23:41.533 "large_cache_size": 16, 00:23:41.533 "task_count": 2048, 00:23:41.533 "sequence_count": 2048, 00:23:41.533 "buf_count": 2048 00:23:41.533 } 00:23:41.533 } 00:23:41.533 ] 00:23:41.533 }, 00:23:41.533 { 00:23:41.533 "subsystem": "bdev", 00:23:41.533 "config": [ 00:23:41.533 { 00:23:41.533 "method": "bdev_set_options", 00:23:41.533 "params": { 00:23:41.533 "bdev_io_pool_size": 65535, 00:23:41.533 "bdev_io_cache_size": 256, 00:23:41.533 "bdev_auto_examine": true, 00:23:41.533 "iobuf_small_cache_size": 128, 00:23:41.533 "iobuf_large_cache_size": 16 00:23:41.533 } 00:23:41.533 }, 00:23:41.533 { 00:23:41.533 "method": "bdev_raid_set_options", 00:23:41.533 "params": { 00:23:41.533 "process_window_size_kb": 1024, 00:23:41.533 "process_max_bandwidth_mb_sec": 0 00:23:41.533 } 00:23:41.533 }, 00:23:41.533 { 00:23:41.533 "method": "bdev_iscsi_set_options", 00:23:41.533 "params": { 00:23:41.533 "timeout_sec": 30 00:23:41.533 } 00:23:41.533 }, 00:23:41.533 { 00:23:41.533 "method": "bdev_nvme_set_options", 00:23:41.534 "params": { 00:23:41.534 "action_on_timeout": "none", 00:23:41.534 "timeout_us": 0, 00:23:41.534 "timeout_admin_us": 0, 00:23:41.534 "keep_alive_timeout_ms": 10000, 00:23:41.534 "arbitration_burst": 0, 00:23:41.534 "low_priority_weight": 0, 00:23:41.534 "medium_priority_weight": 0, 00:23:41.534 "high_priority_weight": 0, 00:23:41.534 "nvme_adminq_poll_period_us": 10000, 00:23:41.534 "nvme_ioq_poll_period_us": 0, 00:23:41.534 "io_queue_requests": 512, 00:23:41.534 "delay_cmd_submit": true, 00:23:41.534 "transport_retry_count": 4, 00:23:41.534 "bdev_retry_count": 3, 00:23:41.534 "transport_ack_timeout": 0, 00:23:41.534 "ctrlr_loss_timeout_sec": 0, 00:23:41.534 "reconnect_delay_sec": 0, 00:23:41.534 "fast_io_fail_timeout_sec": 0, 00:23:41.534 "disable_auto_failback": false, 00:23:41.534 "generate_uuids": false, 00:23:41.534 "transport_tos": 0, 00:23:41.534 "nvme_error_stat": false, 00:23:41.534 
"rdma_srq_size": 0, 00:23:41.534 "io_path_stat": false, 00:23:41.534 "allow_accel_sequence": false, 00:23:41.534 "rdma_max_cq_size": 0, 00:23:41.534 "rdma_cm_event_timeout_ms": 0, 00:23:41.534 "dhchap_digests": [ 00:23:41.534 "sha256", 00:23:41.534 "sha384", 00:23:41.534 "sha512" 00:23:41.534 ], 00:23:41.534 "dhchap_dhgroups": [ 00:23:41.534 "null", 00:23:41.534 "ffdhe2048", 00:23:41.534 "ffdhe3072", 00:23:41.534 "ffdhe4096", 00:23:41.534 "ffdhe6144", 00:23:41.534 "ffdhe8192" 00:23:41.534 ] 00:23:41.534 } 00:23:41.534 }, 00:23:41.534 { 00:23:41.534 "method": "bdev_nvme_attach_controller", 00:23:41.534 "params": { 00:23:41.534 "name": "TLSTEST", 00:23:41.534 "trtype": "TCP", 00:23:41.534 "adrfam": "IPv4", 00:23:41.534 "traddr": "10.0.0.2", 00:23:41.534 "trsvcid": "4420", 00:23:41.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.534 "prchk_reftag": false, 00:23:41.534 "prchk_guard": false, 00:23:41.534 "ctrlr_loss_timeout_sec": 0, 00:23:41.534 "reconnect_delay_sec": 0, 00:23:41.534 "fast_io_fail_timeout_sec": 0, 00:23:41.534 "psk": "key0", 00:23:41.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.534 "hdgst": false, 00:23:41.534 "ddgst": false 00:23:41.534 } 00:23:41.534 }, 00:23:41.534 { 00:23:41.534 "method": "bdev_nvme_set_hotplug", 00:23:41.534 "params": { 00:23:41.534 "period_us": 100000, 00:23:41.534 "enable": false 00:23:41.534 } 00:23:41.534 }, 00:23:41.534 { 00:23:41.534 "method": "bdev_wait_for_examine" 00:23:41.534 } 00:23:41.534 ] 00:23:41.534 }, 00:23:41.534 { 00:23:41.534 "subsystem": "nbd", 00:23:41.534 "config": [] 00:23:41.534 } 00:23:41.534 ] 00:23:41.534 }' 00:23:41.534 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3554859 00:23:41.534 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3554859 ']' 00:23:41.534 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3554859 00:23:41.534 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@955 -- # uname 00:23:41.534 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.534 22:11:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3554859 00:23:41.534 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:41.534 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:41.534 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3554859' 00:23:41.534 killing process with pid 3554859 00:23:41.534 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3554859 00:23:41.534 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.534 00:23:41.534 Latency(us) 00:23:41.534 [2024-10-12T20:12:00.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.534 [2024-10-12T20:12:00.023Z] =================================================================================================================== 00:23:41.534 [2024-10-12T20:12:00.023Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:41.534 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3554859 00:23:41.795 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3554394 00:23:41.795 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3554394 ']' 00:23:41.795 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3554394 00:23:41.795 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:41.795 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.795 22:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3554394 00:23:41.795 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:41.795 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:41.795 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3554394' 00:23:41.795 killing process with pid 3554394 00:23:41.795 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3554394 00:23:41.795 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3554394 00:23:42.056 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:42.056 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:42.056 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:42.056 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.056 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:42.056 "subsystems": [ 00:23:42.056 { 00:23:42.056 "subsystem": "keyring", 00:23:42.056 "config": [ 00:23:42.056 { 00:23:42.056 "method": "keyring_file_add_key", 00:23:42.056 "params": { 00:23:42.056 "name": "key0", 00:23:42.056 "path": "/tmp/tmp.4wINg4DIn2" 00:23:42.056 } 00:23:42.056 } 00:23:42.056 ] 00:23:42.056 }, 00:23:42.056 { 00:23:42.056 "subsystem": "iobuf", 00:23:42.056 "config": [ 00:23:42.056 { 00:23:42.056 "method": "iobuf_set_options", 00:23:42.056 "params": { 00:23:42.056 "small_pool_count": 8192, 00:23:42.056 "large_pool_count": 1024, 00:23:42.056 "small_bufsize": 8192, 00:23:42.056 "large_bufsize": 135168 00:23:42.056 } 00:23:42.056 } 00:23:42.056 ] 00:23:42.056 }, 00:23:42.056 { 
00:23:42.056 "subsystem": "sock", 00:23:42.056 "config": [ 00:23:42.056 { 00:23:42.056 "method": "sock_set_default_impl", 00:23:42.056 "params": { 00:23:42.056 "impl_name": "posix" 00:23:42.056 } 00:23:42.056 }, 00:23:42.056 { 00:23:42.056 "method": "sock_impl_set_options", 00:23:42.056 "params": { 00:23:42.056 "impl_name": "ssl", 00:23:42.056 "recv_buf_size": 4096, 00:23:42.056 "send_buf_size": 4096, 00:23:42.056 "enable_recv_pipe": true, 00:23:42.056 "enable_quickack": false, 00:23:42.056 "enable_placement_id": 0, 00:23:42.056 "enable_zerocopy_send_server": true, 00:23:42.056 "enable_zerocopy_send_client": false, 00:23:42.056 "zerocopy_threshold": 0, 00:23:42.056 "tls_version": 0, 00:23:42.056 "enable_ktls": false 00:23:42.056 } 00:23:42.056 }, 00:23:42.056 { 00:23:42.056 "method": "sock_impl_set_options", 00:23:42.056 "params": { 00:23:42.056 "impl_name": "posix", 00:23:42.056 "recv_buf_size": 2097152, 00:23:42.056 "send_buf_size": 2097152, 00:23:42.056 "enable_recv_pipe": true, 00:23:42.056 "enable_quickack": false, 00:23:42.056 "enable_placement_id": 0, 00:23:42.056 "enable_zerocopy_send_server": true, 00:23:42.056 "enable_zerocopy_send_client": false, 00:23:42.056 "zerocopy_threshold": 0, 00:23:42.056 "tls_version": 0, 00:23:42.056 "enable_ktls": false 00:23:42.056 } 00:23:42.056 } 00:23:42.056 ] 00:23:42.056 }, 00:23:42.056 { 00:23:42.056 "subsystem": "vmd", 00:23:42.056 "config": [] 00:23:42.056 }, 00:23:42.056 { 00:23:42.056 "subsystem": "accel", 00:23:42.056 "config": [ 00:23:42.056 { 00:23:42.056 "method": "accel_set_options", 00:23:42.056 "params": { 00:23:42.056 "small_cache_size": 128, 00:23:42.056 "large_cache_size": 16, 00:23:42.056 "task_count": 2048, 00:23:42.056 "sequence_count": 2048, 00:23:42.056 "buf_count": 2048 00:23:42.056 } 00:23:42.056 } 00:23:42.056 ] 00:23:42.056 }, 00:23:42.056 { 00:23:42.056 "subsystem": "bdev", 00:23:42.056 "config": [ 00:23:42.056 { 00:23:42.056 "method": "bdev_set_options", 00:23:42.057 "params": { 00:23:42.057 
"bdev_io_pool_size": 65535, 00:23:42.057 "bdev_io_cache_size": 256, 00:23:42.057 "bdev_auto_examine": true, 00:23:42.057 "iobuf_small_cache_size": 128, 00:23:42.057 "iobuf_large_cache_size": 16 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "bdev_raid_set_options", 00:23:42.057 "params": { 00:23:42.057 "process_window_size_kb": 1024, 00:23:42.057 "process_max_bandwidth_mb_sec": 0 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "bdev_iscsi_set_options", 00:23:42.057 "params": { 00:23:42.057 "timeout_sec": 30 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "bdev_nvme_set_options", 00:23:42.057 "params": { 00:23:42.057 "action_on_timeout": "none", 00:23:42.057 "timeout_us": 0, 00:23:42.057 "timeout_admin_us": 0, 00:23:42.057 "keep_alive_timeout_ms": 10000, 00:23:42.057 "arbitration_burst": 0, 00:23:42.057 "low_priority_weight": 0, 00:23:42.057 "medium_priority_weight": 0, 00:23:42.057 "high_priority_weight": 0, 00:23:42.057 "nvme_adminq_poll_period_us": 10000, 00:23:42.057 "nvme_ioq_poll_period_us": 0, 00:23:42.057 "io_queue_requests": 0, 00:23:42.057 "delay_cmd_submit": true, 00:23:42.057 "transport_retry_count": 4, 00:23:42.057 "bdev_retry_count": 3, 00:23:42.057 "transport_ack_timeout": 0, 00:23:42.057 "ctrlr_loss_timeout_sec": 0, 00:23:42.057 "reconnect_delay_sec": 0, 00:23:42.057 "fast_io_fail_timeout_sec": 0, 00:23:42.057 "disable_auto_failback": false, 00:23:42.057 "generate_uuids": false, 00:23:42.057 "transport_tos": 0, 00:23:42.057 "nvme_error_stat": false, 00:23:42.057 "rdma_srq_size": 0, 00:23:42.057 "io_path_stat": false, 00:23:42.057 "allow_accel_sequence": false, 00:23:42.057 "rdma_max_cq_size": 0, 00:23:42.057 "rdma_cm_event_timeout_ms": 0, 00:23:42.057 "dhchap_digests": [ 00:23:42.057 "sha256", 00:23:42.057 "sha384", 00:23:42.057 "sha512" 00:23:42.057 ], 00:23:42.057 "dhchap_dhgroups": [ 00:23:42.057 "null", 00:23:42.057 "ffdhe2048", 00:23:42.057 "ffdhe3072", 00:23:42.057 "ffdhe4096", 
00:23:42.057 "ffdhe6144", 00:23:42.057 "ffdhe8192" 00:23:42.057 ] 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "bdev_nvme_set_hotplug", 00:23:42.057 "params": { 00:23:42.057 "period_us": 100000, 00:23:42.057 "enable": false 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "bdev_malloc_create", 00:23:42.057 "params": { 00:23:42.057 "name": "malloc0", 00:23:42.057 "num_blocks": 8192, 00:23:42.057 "block_size": 4096, 00:23:42.057 "physical_block_size": 4096, 00:23:42.057 "uuid": "91a2fdfc-93aa-4f80-b61f-9a8acf588454", 00:23:42.057 "optimal_io_boundary": 0, 00:23:42.057 "md_size": 0, 00:23:42.057 "dif_type": 0, 00:23:42.057 "dif_is_head_of_md": false, 00:23:42.057 "dif_pi_format": 0 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "bdev_wait_for_examine" 00:23:42.057 } 00:23:42.057 ] 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "subsystem": "nbd", 00:23:42.057 "config": [] 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "subsystem": "scheduler", 00:23:42.057 "config": [ 00:23:42.057 { 00:23:42.057 "method": "framework_set_scheduler", 00:23:42.057 "params": { 00:23:42.057 "name": "static" 00:23:42.057 } 00:23:42.057 } 00:23:42.057 ] 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "subsystem": "nvmf", 00:23:42.057 "config": [ 00:23:42.057 { 00:23:42.057 "method": "nvmf_set_config", 00:23:42.057 "params": { 00:23:42.057 "discovery_filter": "match_any", 00:23:42.057 "admin_cmd_passthru": { 00:23:42.057 "identify_ctrlr": false 00:23:42.057 }, 00:23:42.057 "dhchap_digests": [ 00:23:42.057 "sha256", 00:23:42.057 "sha384", 00:23:42.057 "sha512" 00:23:42.057 ], 00:23:42.057 "dhchap_dhgroups": [ 00:23:42.057 "null", 00:23:42.057 "ffdhe2048", 00:23:42.057 "ffdhe3072", 00:23:42.057 "ffdhe4096", 00:23:42.057 "ffdhe6144", 00:23:42.057 "ffdhe8192" 00:23:42.057 ] 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "nvmf_set_max_subsystems", 00:23:42.057 "params": { 00:23:42.057 "max_subsystems": 1024 00:23:42.057 
} 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "nvmf_set_crdt", 00:23:42.057 "params": { 00:23:42.057 "crdt1": 0, 00:23:42.057 "crdt2": 0, 00:23:42.057 "crdt3": 0 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "nvmf_create_transport", 00:23:42.057 "params": { 00:23:42.057 "trtype": "TCP", 00:23:42.057 "max_queue_depth": 128, 00:23:42.057 "max_io_qpairs_per_ctrlr": 127, 00:23:42.057 "in_capsule_data_size": 4096, 00:23:42.057 "max_io_size": 131072, 00:23:42.057 "io_unit_size": 131072, 00:23:42.057 "max_aq_depth": 128, 00:23:42.057 "num_shared_buffers": 511, 00:23:42.057 "buf_cache_size": 4294967295, 00:23:42.057 "dif_insert_or_strip": false, 00:23:42.057 "zcopy": false, 00:23:42.057 "c2h_success": false, 00:23:42.057 "sock_priority": 0, 00:23:42.057 "abort_timeout_sec": 1, 00:23:42.057 "ack_timeout": 0, 00:23:42.057 "data_wr_pool_size": 0 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "nvmf_create_subsystem", 00:23:42.057 "params": { 00:23:42.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.057 "allow_any_host": false, 00:23:42.057 "serial_number": "SPDK00000000000001", 00:23:42.057 "model_number": "SPDK bdev Controller", 00:23:42.057 "max_namespaces": 10, 00:23:42.057 "min_cntlid": 1, 00:23:42.057 "max_cntlid": 65519, 00:23:42.057 "ana_reporting": false 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "nvmf_subsystem_add_host", 00:23:42.057 "params": { 00:23:42.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.057 "host": "nqn.2016-06.io.spdk:host1", 00:23:42.057 "psk": "key0" 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "nvmf_subsystem_add_ns", 00:23:42.057 "params": { 00:23:42.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.057 "namespace": { 00:23:42.057 "nsid": 1, 00:23:42.057 "bdev_name": "malloc0", 00:23:42.057 "nguid": "91A2FDFC93AA4F80B61F9A8ACF588454", 00:23:42.057 "uuid": "91a2fdfc-93aa-4f80-b61f-9a8acf588454", 00:23:42.057 "no_auto_visible": false 
00:23:42.057 } 00:23:42.057 } 00:23:42.057 }, 00:23:42.057 { 00:23:42.057 "method": "nvmf_subsystem_add_listener", 00:23:42.057 "params": { 00:23:42.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.057 "listen_address": { 00:23:42.057 "trtype": "TCP", 00:23:42.057 "adrfam": "IPv4", 00:23:42.057 "traddr": "10.0.0.2", 00:23:42.057 "trsvcid": "4420" 00:23:42.057 }, 00:23:42.057 "secure_channel": true 00:23:42.057 } 00:23:42.057 } 00:23:42.057 ] 00:23:42.057 } 00:23:42.057 ] 00:23:42.057 }' 00:23:42.057 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3555225 00:23:42.057 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:42.057 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3555225 00:23:42.057 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3555225 ']' 00:23:42.057 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.057 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.057 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.057 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.057 22:12:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.057 [2024-10-12 22:12:00.376784] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:42.057 [2024-10-12 22:12:00.376840] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.057 [2024-10-12 22:12:00.458427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.057 [2024-10-12 22:12:00.486735] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.057 [2024-10-12 22:12:00.486771] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.057 [2024-10-12 22:12:00.486777] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.057 [2024-10-12 22:12:00.486782] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.057 [2024-10-12 22:12:00.486787] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
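An aside on the `save_config` dumps above: in these configs the TLS wiring lives in two RPC methods, `keyring_file_add_key` (which names an on-disk PSK file) and `nvmf_subsystem_add_host` (which binds that key name to a host NQN via its `psk` parameter). A minimal Python sketch of resolving the PSK path for a host from such a config; the trimmed JSON below is reconstructed from the log, not an authoritative schema, and the helper function is hypothetical:

```python
import json

# Trimmed reconstruction of the save_config JSON echoed into the target via
# -c /dev/fd/62 in the log above (only the keyring and nvmf pieces kept).
CONFIG = json.loads("""
{
  "subsystems": [
    {"subsystem": "keyring",
     "config": [{"method": "keyring_file_add_key",
                 "params": {"name": "key0", "path": "/tmp/tmp.4wINg4DIn2"}}]},
    {"subsystem": "nvmf",
     "config": [{"method": "nvmf_subsystem_add_host",
                 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                            "host": "nqn.2016-06.io.spdk:host1",
                            "psk": "key0"}}]}
  ]
}
""")

def psk_path_for_host(config, host_nqn):
    """Resolve which on-disk PSK file a host NQN is wired to (hypothetical helper)."""
    keys, psk_name = {}, None
    for sub in config["subsystems"]:
        for entry in sub.get("config", []):
            params = entry.get("params", {})
            if entry.get("method") == "keyring_file_add_key":
                # Map key name -> file path from the keyring subsystem.
                keys[params["name"]] = params["path"]
            elif (entry.get("method") == "nvmf_subsystem_add_host"
                  and params.get("host") == host_nqn):
                # Remember which key name this host is bound to.
                psk_name = params.get("psk")
    return keys.get(psk_name)

print(psk_path_for_host(CONFIG, "nqn.2016-06.io.spdk:host1"))
```

This mirrors what the test itself exercises: `key0` added on both the target and the bdevperf side must point at the same `/tmp/tmp.4wINg4DIn2` file for the TLS handshake to succeed.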
00:23:42.057 [2024-10-12 22:12:00.486832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.318 [2024-10-12 22:12:00.682382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.318 [2024-10-12 22:12:00.714403] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:42.318 [2024-10-12 22:12:00.714581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.889 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3555310 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3555310 /var/tmp/bdevperf.sock 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3555310 ']' 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
-c /dev/fd/63 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.890 22:12:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:42.890 "subsystems": [ 00:23:42.890 { 00:23:42.890 "subsystem": "keyring", 00:23:42.890 "config": [ 00:23:42.890 { 00:23:42.890 "method": "keyring_file_add_key", 00:23:42.890 "params": { 00:23:42.890 "name": "key0", 00:23:42.890 "path": "/tmp/tmp.4wINg4DIn2" 00:23:42.890 } 00:23:42.890 } 00:23:42.890 ] 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "subsystem": "iobuf", 00:23:42.890 "config": [ 00:23:42.890 { 00:23:42.890 "method": "iobuf_set_options", 00:23:42.890 "params": { 00:23:42.890 "small_pool_count": 8192, 00:23:42.890 "large_pool_count": 1024, 00:23:42.890 "small_bufsize": 8192, 00:23:42.890 "large_bufsize": 135168 00:23:42.890 } 00:23:42.890 } 00:23:42.890 ] 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "subsystem": "sock", 00:23:42.890 "config": [ 00:23:42.890 { 00:23:42.890 "method": "sock_set_default_impl", 00:23:42.890 "params": { 00:23:42.890 "impl_name": "posix" 00:23:42.890 } 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "method": "sock_impl_set_options", 00:23:42.890 "params": { 00:23:42.890 "impl_name": "ssl", 00:23:42.890 "recv_buf_size": 4096, 00:23:42.890 "send_buf_size": 4096, 00:23:42.890 "enable_recv_pipe": true, 00:23:42.890 "enable_quickack": false, 00:23:42.890 "enable_placement_id": 0, 00:23:42.890 "enable_zerocopy_send_server": true, 00:23:42.890 "enable_zerocopy_send_client": false, 00:23:42.890 "zerocopy_threshold": 0, 
00:23:42.890 "tls_version": 0, 00:23:42.890 "enable_ktls": false 00:23:42.890 } 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "method": "sock_impl_set_options", 00:23:42.890 "params": { 00:23:42.890 "impl_name": "posix", 00:23:42.890 "recv_buf_size": 2097152, 00:23:42.890 "send_buf_size": 2097152, 00:23:42.890 "enable_recv_pipe": true, 00:23:42.890 "enable_quickack": false, 00:23:42.890 "enable_placement_id": 0, 00:23:42.890 "enable_zerocopy_send_server": true, 00:23:42.890 "enable_zerocopy_send_client": false, 00:23:42.890 "zerocopy_threshold": 0, 00:23:42.890 "tls_version": 0, 00:23:42.890 "enable_ktls": false 00:23:42.890 } 00:23:42.890 } 00:23:42.890 ] 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "subsystem": "vmd", 00:23:42.890 "config": [] 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "subsystem": "accel", 00:23:42.890 "config": [ 00:23:42.890 { 00:23:42.890 "method": "accel_set_options", 00:23:42.890 "params": { 00:23:42.890 "small_cache_size": 128, 00:23:42.890 "large_cache_size": 16, 00:23:42.890 "task_count": 2048, 00:23:42.890 "sequence_count": 2048, 00:23:42.890 "buf_count": 2048 00:23:42.890 } 00:23:42.890 } 00:23:42.890 ] 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "subsystem": "bdev", 00:23:42.890 "config": [ 00:23:42.890 { 00:23:42.890 "method": "bdev_set_options", 00:23:42.890 "params": { 00:23:42.890 "bdev_io_pool_size": 65535, 00:23:42.890 "bdev_io_cache_size": 256, 00:23:42.890 "bdev_auto_examine": true, 00:23:42.890 "iobuf_small_cache_size": 128, 00:23:42.890 "iobuf_large_cache_size": 16 00:23:42.890 } 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "method": "bdev_raid_set_options", 00:23:42.890 "params": { 00:23:42.890 "process_window_size_kb": 1024, 00:23:42.890 "process_max_bandwidth_mb_sec": 0 00:23:42.890 } 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "method": "bdev_iscsi_set_options", 00:23:42.890 "params": { 00:23:42.890 "timeout_sec": 30 00:23:42.890 } 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "method": "bdev_nvme_set_options", 00:23:42.890 
"params": { 00:23:42.890 "action_on_timeout": "none", 00:23:42.890 "timeout_us": 0, 00:23:42.890 "timeout_admin_us": 0, 00:23:42.890 "keep_alive_timeout_ms": 10000, 00:23:42.890 "arbitration_burst": 0, 00:23:42.890 "low_priority_weight": 0, 00:23:42.890 "medium_priority_weight": 0, 00:23:42.890 "high_priority_weight": 0, 00:23:42.890 "nvme_adminq_poll_period_us": 10000, 00:23:42.890 "nvme_ioq_poll_period_us": 0, 00:23:42.890 "io_queue_requests": 512, 00:23:42.890 "delay_cmd_submit": true, 00:23:42.890 "transport_retry_count": 4, 00:23:42.890 "bdev_retry_count": 3, 00:23:42.890 "transport_ack_timeout": 0, 00:23:42.890 "ctrlr_loss_timeout_sec": 0, 00:23:42.890 "reconnect_delay_sec": 0, 00:23:42.890 "fast_io_fail_timeout_sec": 0, 00:23:42.890 "disable_auto_failback": false, 00:23:42.890 "generate_uuids": false, 00:23:42.890 "transport_tos": 0, 00:23:42.890 "nvme_error_stat": false, 00:23:42.890 "rdma_srq_size": 0, 00:23:42.890 "io_path_stat": false, 00:23:42.890 "allow_accel_sequence": false, 00:23:42.890 "rdma_max_cq_size": 0, 00:23:42.890 "rdma_cm_event_timeout_ms": 0, 00:23:42.890 "dhchap_digests": [ 00:23:42.890 "sha256", 00:23:42.890 "sha384", 00:23:42.890 "sha512" 00:23:42.890 ], 00:23:42.890 "dhchap_dhgroups": [ 00:23:42.890 "null", 00:23:42.890 "ffdhe2048", 00:23:42.890 "ffdhe3072", 00:23:42.890 "ffdhe4096", 00:23:42.890 "ffdhe6144", 00:23:42.890 "ffdhe8192" 00:23:42.890 ] 00:23:42.890 } 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "method": "bdev_nvme_attach_controller", 00:23:42.890 "params": { 00:23:42.890 "name": "TLSTEST", 00:23:42.890 "trtype": "TCP", 00:23:42.890 "adrfam": "IPv4", 00:23:42.890 "traddr": "10.0.0.2", 00:23:42.890 "trsvcid": "4420", 00:23:42.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.890 "prchk_reftag": false, 00:23:42.890 "prchk_guard": false, 00:23:42.890 "ctrlr_loss_timeout_sec": 0, 00:23:42.890 "reconnect_delay_sec": 0, 00:23:42.890 "fast_io_fail_timeout_sec": 0, 00:23:42.890 "psk": "key0", 00:23:42.890 "hostnqn": 
"nqn.2016-06.io.spdk:host1", 00:23:42.890 "hdgst": false, 00:23:42.890 "ddgst": false 00:23:42.890 } 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "method": "bdev_nvme_set_hotplug", 00:23:42.890 "params": { 00:23:42.890 "period_us": 100000, 00:23:42.890 "enable": false 00:23:42.890 } 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "method": "bdev_wait_for_examine" 00:23:42.890 } 00:23:42.890 ] 00:23:42.890 }, 00:23:42.890 { 00:23:42.890 "subsystem": "nbd", 00:23:42.890 "config": [] 00:23:42.890 } 00:23:42.890 ] 00:23:42.890 }' 00:23:42.890 [2024-10-12 22:12:01.253717] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:42.890 [2024-10-12 22:12:01.253774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555310 ] 00:23:42.890 [2024-10-12 22:12:01.331305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.890 [2024-10-12 22:12:01.362011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.151 [2024-10-12 22:12:01.495283] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.723 22:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:43.723 22:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:43.723 22:12:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:43.723 Running I/O for 10 seconds... 
00:23:46.048 4595.00 IOPS, 17.95 MiB/s [2024-10-12T20:12:05.479Z] 4203.00 IOPS, 16.42 MiB/s [2024-10-12T20:12:06.509Z] 4458.00 IOPS, 17.41 MiB/s [2024-10-12T20:12:07.495Z] 4818.50 IOPS, 18.82 MiB/s [2024-10-12T20:12:08.435Z] 4913.60 IOPS, 19.19 MiB/s [2024-10-12T20:12:09.377Z] 5092.00 IOPS, 19.89 MiB/s [2024-10-12T20:12:10.320Z] 5183.00 IOPS, 20.25 MiB/s [2024-10-12T20:12:11.263Z] 5276.25 IOPS, 20.61 MiB/s [2024-10-12T20:12:12.206Z] 5249.78 IOPS, 20.51 MiB/s [2024-10-12T20:12:12.467Z] 5265.70 IOPS, 20.57 MiB/s 00:23:53.978 Latency(us) 00:23:53.978 [2024-10-12T20:12:12.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.978 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:53.978 Verification LBA range: start 0x0 length 0x2000 00:23:53.978 TLSTESTn1 : 10.06 5246.20 20.49 0.00 0.00 24332.88 5679.79 77332.48 00:23:53.978 [2024-10-12T20:12:12.467Z] =================================================================================================================== 00:23:53.978 [2024-10-12T20:12:12.467Z] Total : 5246.20 20.49 0.00 0.00 24332.88 5679.79 77332.48 00:23:53.978 { 00:23:53.978 "results": [ 00:23:53.978 { 00:23:53.978 "job": "TLSTESTn1", 00:23:53.978 "core_mask": "0x4", 00:23:53.978 "workload": "verify", 00:23:53.978 "status": "finished", 00:23:53.978 "verify_range": { 00:23:53.978 "start": 0, 00:23:53.978 "length": 8192 00:23:53.978 }, 00:23:53.978 "queue_depth": 128, 00:23:53.978 "io_size": 4096, 00:23:53.978 "runtime": 10.061386, 00:23:53.978 "iops": 5246.195703057213, 00:23:53.978 "mibps": 20.49295196506724, 00:23:53.978 "io_failed": 0, 00:23:53.978 "io_timeout": 0, 00:23:53.978 "avg_latency_us": 24332.88432454279, 00:23:53.978 "min_latency_us": 5679.786666666667, 00:23:53.978 "max_latency_us": 77332.48 00:23:53.978 } 00:23:53.978 ], 00:23:53.978 "core_count": 1 00:23:53.978 } 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3555310 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3555310 ']' 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3555310 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3555310 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3555310' 00:23:53.978 killing process with pid 3555310 00:23:53.978 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3555310 00:23:53.979 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.979 00:23:53.979 Latency(us) 00:23:53.979 [2024-10-12T20:12:12.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.979 [2024-10-12T20:12:12.468Z] =================================================================================================================== 00:23:53.979 [2024-10-12T20:12:12.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.979 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3555310 00:23:53.979 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3555225 00:23:53.979 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 
-- # '[' -z 3555225 ']' 00:23:53.979 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3555225 00:23:53.979 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:53.979 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.979 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3555225 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3555225' 00:23:54.240 killing process with pid 3555225 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3555225 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3555225 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3558150 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3558150 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:54.240 22:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3558150 ']' 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.240 22:12:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.240 [2024-10-12 22:12:12.660777] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:54.240 [2024-10-12 22:12:12.660832] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.502 [2024-10-12 22:12:12.745593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.502 [2024-10-12 22:12:12.781744] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.502 [2024-10-12 22:12:12.781799] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.502 [2024-10-12 22:12:12.781807] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.502 [2024-10-12 22:12:12.781814] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:54.502 [2024-10-12 22:12:12.781820] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.502 [2024-10-12 22:12:12.781843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.075 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.075 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:55.075 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:55.075 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:55.075 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.075 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.075 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.4wINg4DIn2 00:23:55.075 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4wINg4DIn2 00:23:55.075 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.337 [2024-10-12 22:12:13.683354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.337 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:55.599 22:12:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.599 [2024-10-12 22:12:14.044246] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:55.599 [2024-10-12 22:12:14.044574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.599 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.860 malloc0 00:23:55.860 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:56.121 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4wINg4DIn2 00:23:56.384 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.384 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3558517 00:23:56.384 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:56.384 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.384 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3558517 /var/tmp/bdevperf.sock 00:23:56.384 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3558517 ']' 00:23:56.384 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.384 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.384 
22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.384 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.384 22:12:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.384 [2024-10-12 22:12:14.853015] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:56.384 [2024-10-12 22:12:14.853082] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3558517 ] 00:23:56.645 [2024-10-12 22:12:14.934255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.645 [2024-10-12 22:12:14.966843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.219 22:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.219 22:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:57.219 22:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4wINg4DIn2 00:23:57.480 22:12:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:57.480 [2024-10-12 22:12:15.959020] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:23:57.741 nvme0n1 00:23:57.741 22:12:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.741 Running I/O for 1 seconds... 00:23:58.682 5216.00 IOPS, 20.38 MiB/s 00:23:58.682 Latency(us) 00:23:58.682 [2024-10-12T20:12:17.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.682 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:58.682 Verification LBA range: start 0x0 length 0x2000 00:23:58.682 nvme0n1 : 1.02 5250.96 20.51 0.00 0.00 24211.05 4532.91 68157.44 00:23:58.682 [2024-10-12T20:12:17.171Z] =================================================================================================================== 00:23:58.682 [2024-10-12T20:12:17.171Z] Total : 5250.96 20.51 0.00 0.00 24211.05 4532.91 68157.44 00:23:58.682 { 00:23:58.682 "results": [ 00:23:58.682 { 00:23:58.682 "job": "nvme0n1", 00:23:58.682 "core_mask": "0x2", 00:23:58.682 "workload": "verify", 00:23:58.682 "status": "finished", 00:23:58.682 "verify_range": { 00:23:58.682 "start": 0, 00:23:58.682 "length": 8192 00:23:58.682 }, 00:23:58.682 "queue_depth": 128, 00:23:58.682 "io_size": 4096, 00:23:58.682 "runtime": 1.01791, 00:23:58.682 "iops": 5250.955388983309, 00:23:58.682 "mibps": 20.51154448821605, 00:23:58.682 "io_failed": 0, 00:23:58.682 "io_timeout": 0, 00:23:58.682 "avg_latency_us": 24211.04857125039, 00:23:58.682 "min_latency_us": 4532.906666666667, 00:23:58.682 "max_latency_us": 68157.44 00:23:58.682 } 00:23:58.682 ], 00:23:58.682 "core_count": 1 00:23:58.682 } 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3558517 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3558517 ']' 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 3558517 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3558517 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3558517' 00:23:58.943 killing process with pid 3558517 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3558517 00:23:58.943 Received shutdown signal, test time was about 1.000000 seconds 00:23:58.943 00:23:58.943 Latency(us) 00:23:58.943 [2024-10-12T20:12:17.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.943 [2024-10-12T20:12:17.432Z] =================================================================================================================== 00:23:58.943 [2024-10-12T20:12:17.432Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3558517 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3558150 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3558150 ']' 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3558150 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3558150 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3558150' 00:23:58.943 killing process with pid 3558150 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3558150 00:23:58.943 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3558150 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3559150 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3559150 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3559150 ']' 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:59.204 22:12:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.204 [2024-10-12 22:12:17.627246] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:59.204 [2024-10-12 22:12:17.627307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.466 [2024-10-12 22:12:17.712494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.466 [2024-10-12 22:12:17.757813] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.466 [2024-10-12 22:12:17.757864] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.466 [2024-10-12 22:12:17.757873] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.466 [2024-10-12 22:12:17.757879] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.466 [2024-10-12 22:12:17.757885] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.466 [2024-10-12 22:12:17.757907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.039 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:00.039 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:00.039 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:00.039 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:00.039 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.039 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.039 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:00.039 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.039 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.039 [2024-10-12 22:12:18.472668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.039 malloc0 00:24:00.039 [2024-10-12 22:12:18.513724] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:00.039 [2024-10-12 22:12:18.514025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3559230 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3559230 /var/tmp/bdevperf.sock 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3559230 ']' 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.301 [2024-10-12 22:12:18.593191] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:00.301 [2024-10-12 22:12:18.593251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3559230 ] 00:24:00.301 [2024-10-12 22:12:18.673866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.301 [2024-10-12 22:12:18.706775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:00.301 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4wINg4DIn2 00:24:00.563 22:12:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:00.824 [2024-10-12 22:12:19.101946] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.824 nvme0n1 00:24:00.824 22:12:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:00.824 Running I/O for 1 seconds... 
00:24:02.212 5462.00 IOPS, 21.34 MiB/s 00:24:02.212 Latency(us) 00:24:02.212 [2024-10-12T20:12:20.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.212 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:02.212 Verification LBA range: start 0x0 length 0x2000 00:24:02.212 nvme0n1 : 1.01 5517.70 21.55 0.00 0.00 23058.00 5379.41 40632.32 00:24:02.212 [2024-10-12T20:12:20.701Z] =================================================================================================================== 00:24:02.212 [2024-10-12T20:12:20.701Z] Total : 5517.70 21.55 0.00 0.00 23058.00 5379.41 40632.32 00:24:02.212 { 00:24:02.212 "results": [ 00:24:02.212 { 00:24:02.212 "job": "nvme0n1", 00:24:02.212 "core_mask": "0x2", 00:24:02.212 "workload": "verify", 00:24:02.212 "status": "finished", 00:24:02.212 "verify_range": { 00:24:02.212 "start": 0, 00:24:02.212 "length": 8192 00:24:02.212 }, 00:24:02.212 "queue_depth": 128, 00:24:02.212 "io_size": 4096, 00:24:02.212 "runtime": 1.013285, 00:24:02.212 "iops": 5517.697390171571, 00:24:02.212 "mibps": 21.5535054303577, 00:24:02.212 "io_failed": 0, 00:24:02.212 "io_timeout": 0, 00:24:02.212 "avg_latency_us": 23057.998907768437, 00:24:02.212 "min_latency_us": 5379.413333333333, 00:24:02.212 "max_latency_us": 40632.32 00:24:02.212 } 00:24:02.212 ], 00:24:02.212 "core_count": 1 00:24:02.212 } 00:24:02.212 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:02.212 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.212 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.212 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.212 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:02.212 "subsystems": [ 00:24:02.212 { 00:24:02.212 "subsystem": "keyring", 
00:24:02.212 "config": [ 00:24:02.212 { 00:24:02.212 "method": "keyring_file_add_key", 00:24:02.212 "params": { 00:24:02.212 "name": "key0", 00:24:02.212 "path": "/tmp/tmp.4wINg4DIn2" 00:24:02.212 } 00:24:02.212 } 00:24:02.212 ] 00:24:02.212 }, 00:24:02.212 { 00:24:02.212 "subsystem": "iobuf", 00:24:02.212 "config": [ 00:24:02.212 { 00:24:02.212 "method": "iobuf_set_options", 00:24:02.212 "params": { 00:24:02.212 "small_pool_count": 8192, 00:24:02.212 "large_pool_count": 1024, 00:24:02.212 "small_bufsize": 8192, 00:24:02.212 "large_bufsize": 135168 00:24:02.212 } 00:24:02.212 } 00:24:02.212 ] 00:24:02.212 }, 00:24:02.212 { 00:24:02.212 "subsystem": "sock", 00:24:02.212 "config": [ 00:24:02.212 { 00:24:02.212 "method": "sock_set_default_impl", 00:24:02.212 "params": { 00:24:02.212 "impl_name": "posix" 00:24:02.212 } 00:24:02.212 }, 00:24:02.212 { 00:24:02.212 "method": "sock_impl_set_options", 00:24:02.212 "params": { 00:24:02.212 "impl_name": "ssl", 00:24:02.212 "recv_buf_size": 4096, 00:24:02.212 "send_buf_size": 4096, 00:24:02.212 "enable_recv_pipe": true, 00:24:02.212 "enable_quickack": false, 00:24:02.212 "enable_placement_id": 0, 00:24:02.212 "enable_zerocopy_send_server": true, 00:24:02.212 "enable_zerocopy_send_client": false, 00:24:02.212 "zerocopy_threshold": 0, 00:24:02.212 "tls_version": 0, 00:24:02.212 "enable_ktls": false 00:24:02.212 } 00:24:02.212 }, 00:24:02.212 { 00:24:02.212 "method": "sock_impl_set_options", 00:24:02.212 "params": { 00:24:02.212 "impl_name": "posix", 00:24:02.212 "recv_buf_size": 2097152, 00:24:02.212 "send_buf_size": 2097152, 00:24:02.212 "enable_recv_pipe": true, 00:24:02.212 "enable_quickack": false, 00:24:02.212 "enable_placement_id": 0, 00:24:02.212 "enable_zerocopy_send_server": true, 00:24:02.212 "enable_zerocopy_send_client": false, 00:24:02.212 "zerocopy_threshold": 0, 00:24:02.212 "tls_version": 0, 00:24:02.212 "enable_ktls": false 00:24:02.212 } 00:24:02.212 } 00:24:02.212 ] 00:24:02.212 }, 00:24:02.212 { 00:24:02.212 
"subsystem": "vmd", 00:24:02.212 "config": [] 00:24:02.212 }, 00:24:02.212 { 00:24:02.212 "subsystem": "accel", 00:24:02.212 "config": [ 00:24:02.212 { 00:24:02.212 "method": "accel_set_options", 00:24:02.212 "params": { 00:24:02.212 "small_cache_size": 128, 00:24:02.212 "large_cache_size": 16, 00:24:02.212 "task_count": 2048, 00:24:02.212 "sequence_count": 2048, 00:24:02.212 "buf_count": 2048 00:24:02.212 } 00:24:02.212 } 00:24:02.212 ] 00:24:02.212 }, 00:24:02.212 { 00:24:02.212 "subsystem": "bdev", 00:24:02.212 "config": [ 00:24:02.212 { 00:24:02.212 "method": "bdev_set_options", 00:24:02.212 "params": { 00:24:02.212 "bdev_io_pool_size": 65535, 00:24:02.212 "bdev_io_cache_size": 256, 00:24:02.212 "bdev_auto_examine": true, 00:24:02.212 "iobuf_small_cache_size": 128, 00:24:02.212 "iobuf_large_cache_size": 16 00:24:02.212 } 00:24:02.212 }, 00:24:02.212 { 00:24:02.212 "method": "bdev_raid_set_options", 00:24:02.212 "params": { 00:24:02.212 "process_window_size_kb": 1024, 00:24:02.212 "process_max_bandwidth_mb_sec": 0 00:24:02.212 } 00:24:02.212 }, 00:24:02.212 { 00:24:02.212 "method": "bdev_iscsi_set_options", 00:24:02.212 "params": { 00:24:02.212 "timeout_sec": 30 00:24:02.212 } 00:24:02.212 }, 00:24:02.213 { 00:24:02.213 "method": "bdev_nvme_set_options", 00:24:02.213 "params": { 00:24:02.213 "action_on_timeout": "none", 00:24:02.213 "timeout_us": 0, 00:24:02.213 "timeout_admin_us": 0, 00:24:02.213 "keep_alive_timeout_ms": 10000, 00:24:02.213 "arbitration_burst": 0, 00:24:02.213 "low_priority_weight": 0, 00:24:02.213 "medium_priority_weight": 0, 00:24:02.213 "high_priority_weight": 0, 00:24:02.213 "nvme_adminq_poll_period_us": 10000, 00:24:02.213 "nvme_ioq_poll_period_us": 0, 00:24:02.213 "io_queue_requests": 0, 00:24:02.213 "delay_cmd_submit": true, 00:24:02.213 "transport_retry_count": 4, 00:24:02.213 "bdev_retry_count": 3, 00:24:02.213 "transport_ack_timeout": 0, 00:24:02.213 "ctrlr_loss_timeout_sec": 0, 00:24:02.213 "reconnect_delay_sec": 0, 00:24:02.213 
"fast_io_fail_timeout_sec": 0, 00:24:02.213 "disable_auto_failback": false, 00:24:02.213 "generate_uuids": false, 00:24:02.213 "transport_tos": 0, 00:24:02.213 "nvme_error_stat": false, 00:24:02.213 "rdma_srq_size": 0, 00:24:02.213 "io_path_stat": false, 00:24:02.213 "allow_accel_sequence": false, 00:24:02.213 "rdma_max_cq_size": 0, 00:24:02.213 "rdma_cm_event_timeout_ms": 0, 00:24:02.213 "dhchap_digests": [ 00:24:02.213 "sha256", 00:24:02.213 "sha384", 00:24:02.213 "sha512" 00:24:02.213 ], 00:24:02.213 "dhchap_dhgroups": [ 00:24:02.213 "null", 00:24:02.213 "ffdhe2048", 00:24:02.213 "ffdhe3072", 00:24:02.213 "ffdhe4096", 00:24:02.213 "ffdhe6144", 00:24:02.213 "ffdhe8192" 00:24:02.213 ] 00:24:02.213 } 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "method": "bdev_nvme_set_hotplug", 00:24:02.213 "params": { 00:24:02.213 "period_us": 100000, 00:24:02.213 "enable": false 00:24:02.213 } 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "method": "bdev_malloc_create", 00:24:02.213 "params": { 00:24:02.213 "name": "malloc0", 00:24:02.213 "num_blocks": 8192, 00:24:02.213 "block_size": 4096, 00:24:02.213 "physical_block_size": 4096, 00:24:02.213 "uuid": "42eb35cd-5fbe-4a97-a765-8981ee934ced", 00:24:02.213 "optimal_io_boundary": 0, 00:24:02.213 "md_size": 0, 00:24:02.213 "dif_type": 0, 00:24:02.213 "dif_is_head_of_md": false, 00:24:02.213 "dif_pi_format": 0 00:24:02.213 } 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "method": "bdev_wait_for_examine" 00:24:02.213 } 00:24:02.213 ] 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "subsystem": "nbd", 00:24:02.213 "config": [] 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "subsystem": "scheduler", 00:24:02.213 "config": [ 00:24:02.213 { 00:24:02.213 "method": "framework_set_scheduler", 00:24:02.213 "params": { 00:24:02.213 "name": "static" 00:24:02.213 } 00:24:02.213 } 00:24:02.213 ] 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "subsystem": "nvmf", 00:24:02.213 "config": [ 00:24:02.213 { 00:24:02.213 "method": "nvmf_set_config", 00:24:02.213 
"params": { 00:24:02.213 "discovery_filter": "match_any", 00:24:02.213 "admin_cmd_passthru": { 00:24:02.213 "identify_ctrlr": false 00:24:02.213 }, 00:24:02.213 "dhchap_digests": [ 00:24:02.213 "sha256", 00:24:02.213 "sha384", 00:24:02.213 "sha512" 00:24:02.213 ], 00:24:02.213 "dhchap_dhgroups": [ 00:24:02.213 "null", 00:24:02.213 "ffdhe2048", 00:24:02.213 "ffdhe3072", 00:24:02.213 "ffdhe4096", 00:24:02.213 "ffdhe6144", 00:24:02.213 "ffdhe8192" 00:24:02.213 ] 00:24:02.213 } 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "method": "nvmf_set_max_subsystems", 00:24:02.213 "params": { 00:24:02.213 "max_subsystems": 1024 00:24:02.213 } 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "method": "nvmf_set_crdt", 00:24:02.213 "params": { 00:24:02.213 "crdt1": 0, 00:24:02.213 "crdt2": 0, 00:24:02.213 "crdt3": 0 00:24:02.213 } 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "method": "nvmf_create_transport", 00:24:02.213 "params": { 00:24:02.213 "trtype": "TCP", 00:24:02.213 "max_queue_depth": 128, 00:24:02.213 "max_io_qpairs_per_ctrlr": 127, 00:24:02.213 "in_capsule_data_size": 4096, 00:24:02.213 "max_io_size": 131072, 00:24:02.213 "io_unit_size": 131072, 00:24:02.213 "max_aq_depth": 128, 00:24:02.213 "num_shared_buffers": 511, 00:24:02.213 "buf_cache_size": 4294967295, 00:24:02.213 "dif_insert_or_strip": false, 00:24:02.213 "zcopy": false, 00:24:02.213 "c2h_success": false, 00:24:02.213 "sock_priority": 0, 00:24:02.213 "abort_timeout_sec": 1, 00:24:02.213 "ack_timeout": 0, 00:24:02.213 "data_wr_pool_size": 0 00:24:02.213 } 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "method": "nvmf_create_subsystem", 00:24:02.213 "params": { 00:24:02.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.213 "allow_any_host": false, 00:24:02.213 "serial_number": "00000000000000000000", 00:24:02.213 "model_number": "SPDK bdev Controller", 00:24:02.213 "max_namespaces": 32, 00:24:02.213 "min_cntlid": 1, 00:24:02.213 "max_cntlid": 65519, 00:24:02.213 "ana_reporting": false 00:24:02.213 } 00:24:02.213 }, 
00:24:02.213 { 00:24:02.213 "method": "nvmf_subsystem_add_host", 00:24:02.213 "params": { 00:24:02.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.213 "host": "nqn.2016-06.io.spdk:host1", 00:24:02.213 "psk": "key0" 00:24:02.213 } 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "method": "nvmf_subsystem_add_ns", 00:24:02.213 "params": { 00:24:02.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.213 "namespace": { 00:24:02.213 "nsid": 1, 00:24:02.213 "bdev_name": "malloc0", 00:24:02.213 "nguid": "42EB35CD5FBE4A97A7658981EE934CED", 00:24:02.213 "uuid": "42eb35cd-5fbe-4a97-a765-8981ee934ced", 00:24:02.213 "no_auto_visible": false 00:24:02.213 } 00:24:02.213 } 00:24:02.213 }, 00:24:02.213 { 00:24:02.213 "method": "nvmf_subsystem_add_listener", 00:24:02.213 "params": { 00:24:02.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.213 "listen_address": { 00:24:02.213 "trtype": "TCP", 00:24:02.213 "adrfam": "IPv4", 00:24:02.213 "traddr": "10.0.0.2", 00:24:02.213 "trsvcid": "4420" 00:24:02.213 }, 00:24:02.213 "secure_channel": false, 00:24:02.213 "sock_impl": "ssl" 00:24:02.213 } 00:24:02.213 } 00:24:02.213 ] 00:24:02.213 } 00:24:02.213 ] 00:24:02.213 }' 00:24:02.213 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:02.214 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:02.214 "subsystems": [ 00:24:02.214 { 00:24:02.214 "subsystem": "keyring", 00:24:02.214 "config": [ 00:24:02.214 { 00:24:02.214 "method": "keyring_file_add_key", 00:24:02.214 "params": { 00:24:02.214 "name": "key0", 00:24:02.214 "path": "/tmp/tmp.4wINg4DIn2" 00:24:02.214 } 00:24:02.214 } 00:24:02.214 ] 00:24:02.214 }, 00:24:02.214 { 00:24:02.214 "subsystem": "iobuf", 00:24:02.214 "config": [ 00:24:02.214 { 00:24:02.214 "method": "iobuf_set_options", 00:24:02.214 "params": { 00:24:02.214 "small_pool_count": 8192, 00:24:02.214 "large_pool_count": 
1024, 00:24:02.214 "small_bufsize": 8192, 00:24:02.214 "large_bufsize": 135168 00:24:02.214 } 00:24:02.214 } 00:24:02.214 ] 00:24:02.214 }, 00:24:02.214 { 00:24:02.214 "subsystem": "sock", 00:24:02.214 "config": [ 00:24:02.214 { 00:24:02.214 "method": "sock_set_default_impl", 00:24:02.214 "params": { 00:24:02.214 "impl_name": "posix" 00:24:02.214 } 00:24:02.214 }, 00:24:02.214 { 00:24:02.214 "method": "sock_impl_set_options", 00:24:02.214 "params": { 00:24:02.214 "impl_name": "ssl", 00:24:02.214 "recv_buf_size": 4096, 00:24:02.214 "send_buf_size": 4096, 00:24:02.214 "enable_recv_pipe": true, 00:24:02.214 "enable_quickack": false, 00:24:02.214 "enable_placement_id": 0, 00:24:02.214 "enable_zerocopy_send_server": true, 00:24:02.214 "enable_zerocopy_send_client": false, 00:24:02.214 "zerocopy_threshold": 0, 00:24:02.214 "tls_version": 0, 00:24:02.214 "enable_ktls": false 00:24:02.214 } 00:24:02.214 }, 00:24:02.214 { 00:24:02.214 "method": "sock_impl_set_options", 00:24:02.214 "params": { 00:24:02.214 "impl_name": "posix", 00:24:02.214 "recv_buf_size": 2097152, 00:24:02.214 "send_buf_size": 2097152, 00:24:02.214 "enable_recv_pipe": true, 00:24:02.214 "enable_quickack": false, 00:24:02.214 "enable_placement_id": 0, 00:24:02.214 "enable_zerocopy_send_server": true, 00:24:02.214 "enable_zerocopy_send_client": false, 00:24:02.214 "zerocopy_threshold": 0, 00:24:02.214 "tls_version": 0, 00:24:02.214 "enable_ktls": false 00:24:02.214 } 00:24:02.214 } 00:24:02.214 ] 00:24:02.214 }, 00:24:02.214 { 00:24:02.214 "subsystem": "vmd", 00:24:02.214 "config": [] 00:24:02.214 }, 00:24:02.214 { 00:24:02.214 "subsystem": "accel", 00:24:02.214 "config": [ 00:24:02.214 { 00:24:02.214 "method": "accel_set_options", 00:24:02.214 "params": { 00:24:02.214 "small_cache_size": 128, 00:24:02.214 "large_cache_size": 16, 00:24:02.214 "task_count": 2048, 00:24:02.214 "sequence_count": 2048, 00:24:02.214 "buf_count": 2048 00:24:02.214 } 00:24:02.214 } 00:24:02.214 ] 00:24:02.214 }, 00:24:02.214 { 
00:24:02.214 "subsystem": "bdev", 00:24:02.214 "config": [ 00:24:02.214 { 00:24:02.214 "method": "bdev_set_options", 00:24:02.214 "params": { 00:24:02.214 "bdev_io_pool_size": 65535, 00:24:02.214 "bdev_io_cache_size": 256, 00:24:02.214 "bdev_auto_examine": true, 00:24:02.214 "iobuf_small_cache_size": 128, 00:24:02.214 "iobuf_large_cache_size": 16 00:24:02.214 } 00:24:02.214 }, 00:24:02.214 { 00:24:02.214 "method": "bdev_raid_set_options", 00:24:02.214 "params": { 00:24:02.214 "process_window_size_kb": 1024, 00:24:02.214 "process_max_bandwidth_mb_sec": 0 00:24:02.214 } 00:24:02.214 }, 00:24:02.214 { 00:24:02.214 "method": "bdev_iscsi_set_options", 00:24:02.214 "params": { 00:24:02.214 "timeout_sec": 30 00:24:02.214 } 00:24:02.214 }, 00:24:02.214 { 00:24:02.214 "method": "bdev_nvme_set_options", 00:24:02.214 "params": { 00:24:02.214 "action_on_timeout": "none", 00:24:02.214 "timeout_us": 0, 00:24:02.214 "timeout_admin_us": 0, 00:24:02.214 "keep_alive_timeout_ms": 10000, 00:24:02.214 "arbitration_burst": 0, 00:24:02.214 "low_priority_weight": 0, 00:24:02.214 "medium_priority_weight": 0, 00:24:02.214 "high_priority_weight": 0, 00:24:02.214 "nvme_adminq_poll_period_us": 10000, 00:24:02.214 "nvme_ioq_poll_period_us": 0, 00:24:02.214 "io_queue_requests": 512, 00:24:02.214 "delay_cmd_submit": true, 00:24:02.214 "transport_retry_count": 4, 00:24:02.214 "bdev_retry_count": 3, 00:24:02.214 "transport_ack_timeout": 0, 00:24:02.214 "ctrlr_loss_timeout_sec": 0, 00:24:02.214 "reconnect_delay_sec": 0, 00:24:02.214 "fast_io_fail_timeout_sec": 0, 00:24:02.214 "disable_auto_failback": false, 00:24:02.214 "generate_uuids": false, 00:24:02.214 "transport_tos": 0, 00:24:02.214 "nvme_error_stat": false, 00:24:02.214 "rdma_srq_size": 0, 00:24:02.214 "io_path_stat": false, 00:24:02.214 "allow_accel_sequence": false, 00:24:02.214 "rdma_max_cq_size": 0, 00:24:02.214 "rdma_cm_event_timeout_ms": 0, 00:24:02.214 "dhchap_digests": [ 00:24:02.214 "sha256", 00:24:02.214 "sha384", 00:24:02.214 
"sha512" 00:24:02.214 ], 00:24:02.214 "dhchap_dhgroups": [ 00:24:02.214 "null", 00:24:02.214 "ffdhe2048", 00:24:02.214 "ffdhe3072", 00:24:02.214 "ffdhe4096", 00:24:02.214 "ffdhe6144", 00:24:02.214 "ffdhe8192" 00:24:02.214 ] 00:24:02.214 } 00:24:02.214 }, 00:24:02.214 { 00:24:02.214 "method": "bdev_nvme_attach_controller", 00:24:02.214 "params": { 00:24:02.214 "name": "nvme0", 00:24:02.214 "trtype": "TCP", 00:24:02.214 "adrfam": "IPv4", 00:24:02.214 "traddr": "10.0.0.2", 00:24:02.214 "trsvcid": "4420", 00:24:02.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.214 "prchk_reftag": false, 00:24:02.215 "prchk_guard": false, 00:24:02.215 "ctrlr_loss_timeout_sec": 0, 00:24:02.215 "reconnect_delay_sec": 0, 00:24:02.215 "fast_io_fail_timeout_sec": 0, 00:24:02.215 "psk": "key0", 00:24:02.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.215 "hdgst": false, 00:24:02.215 "ddgst": false 00:24:02.215 } 00:24:02.215 }, 00:24:02.215 { 00:24:02.215 "method": "bdev_nvme_set_hotplug", 00:24:02.215 "params": { 00:24:02.215 "period_us": 100000, 00:24:02.215 "enable": false 00:24:02.215 } 00:24:02.215 }, 00:24:02.215 { 00:24:02.215 "method": "bdev_enable_histogram", 00:24:02.215 "params": { 00:24:02.215 "name": "nvme0n1", 00:24:02.215 "enable": true 00:24:02.215 } 00:24:02.215 }, 00:24:02.215 { 00:24:02.215 "method": "bdev_wait_for_examine" 00:24:02.215 } 00:24:02.215 ] 00:24:02.215 }, 00:24:02.215 { 00:24:02.215 "subsystem": "nbd", 00:24:02.215 "config": [] 00:24:02.215 } 00:24:02.215 ] 00:24:02.215 }' 00:24:02.215 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3559230 00:24:02.215 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3559230 ']' 00:24:02.215 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3559230 00:24:02.215 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:02.215 22:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3559230 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3559230' 00:24:02.477 killing process with pid 3559230 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3559230 00:24:02.477 Received shutdown signal, test time was about 1.000000 seconds 00:24:02.477 00:24:02.477 Latency(us) 00:24:02.477 [2024-10-12T20:12:20.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.477 [2024-10-12T20:12:20.966Z] =================================================================================================================== 00:24:02.477 [2024-10-12T20:12:20.966Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3559230 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3559150 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3559150 ']' 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3559150 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 3559150 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3559150' 00:24:02.477 killing process with pid 3559150 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3559150 00:24:02.477 22:12:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3559150 00:24:02.739 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:02.739 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:02.739 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:02.739 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:02.739 "subsystems": [ 00:24:02.739 { 00:24:02.739 "subsystem": "keyring", 00:24:02.739 "config": [ 00:24:02.739 { 00:24:02.739 "method": "keyring_file_add_key", 00:24:02.739 "params": { 00:24:02.739 "name": "key0", 00:24:02.739 "path": "/tmp/tmp.4wINg4DIn2" 00:24:02.739 } 00:24:02.739 } 00:24:02.739 ] 00:24:02.739 }, 00:24:02.739 { 00:24:02.739 "subsystem": "iobuf", 00:24:02.739 "config": [ 00:24:02.739 { 00:24:02.739 "method": "iobuf_set_options", 00:24:02.739 "params": { 00:24:02.739 "small_pool_count": 8192, 00:24:02.739 "large_pool_count": 1024, 00:24:02.739 "small_bufsize": 8192, 00:24:02.739 "large_bufsize": 135168 00:24:02.739 } 00:24:02.739 } 00:24:02.739 ] 00:24:02.739 }, 00:24:02.739 { 00:24:02.739 "subsystem": "sock", 00:24:02.739 "config": [ 00:24:02.739 { 00:24:02.739 "method": "sock_set_default_impl", 00:24:02.739 "params": { 00:24:02.739 "impl_name": "posix" 00:24:02.739 } 
00:24:02.739 }, 00:24:02.739 { 00:24:02.739 "method": "sock_impl_set_options", 00:24:02.739 "params": { 00:24:02.740 "impl_name": "ssl", 00:24:02.740 "recv_buf_size": 4096, 00:24:02.740 "send_buf_size": 4096, 00:24:02.740 "enable_recv_pipe": true, 00:24:02.740 "enable_quickack": false, 00:24:02.740 "enable_placement_id": 0, 00:24:02.740 "enable_zerocopy_send_server": true, 00:24:02.740 "enable_zerocopy_send_client": false, 00:24:02.740 "zerocopy_threshold": 0, 00:24:02.740 "tls_version": 0, 00:24:02.740 "enable_ktls": false 00:24:02.740 } 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "sock_impl_set_options", 00:24:02.740 "params": { 00:24:02.740 "impl_name": "posix", 00:24:02.740 "recv_buf_size": 2097152, 00:24:02.740 "send_buf_size": 2097152, 00:24:02.740 "enable_recv_pipe": true, 00:24:02.740 "enable_quickack": false, 00:24:02.740 "enable_placement_id": 0, 00:24:02.740 "enable_zerocopy_send_server": true, 00:24:02.740 "enable_zerocopy_send_client": false, 00:24:02.740 "zerocopy_threshold": 0, 00:24:02.740 "tls_version": 0, 00:24:02.740 "enable_ktls": false 00:24:02.740 } 00:24:02.740 } 00:24:02.740 ] 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "subsystem": "vmd", 00:24:02.740 "config": [] 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "subsystem": "accel", 00:24:02.740 "config": [ 00:24:02.740 { 00:24:02.740 "method": "accel_set_options", 00:24:02.740 "params": { 00:24:02.740 "small_cache_size": 128, 00:24:02.740 "large_cache_size": 16, 00:24:02.740 "task_count": 2048, 00:24:02.740 "sequence_count": 2048, 00:24:02.740 "buf_count": 2048 00:24:02.740 } 00:24:02.740 } 00:24:02.740 ] 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "subsystem": "bdev", 00:24:02.740 "config": [ 00:24:02.740 { 00:24:02.740 "method": "bdev_set_options", 00:24:02.740 "params": { 00:24:02.740 "bdev_io_pool_size": 65535, 00:24:02.740 "bdev_io_cache_size": 256, 00:24:02.740 "bdev_auto_examine": true, 00:24:02.740 "iobuf_small_cache_size": 128, 00:24:02.740 "iobuf_large_cache_size": 16 
00:24:02.740 } 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "bdev_raid_set_options", 00:24:02.740 "params": { 00:24:02.740 "process_window_size_kb": 1024, 00:24:02.740 "process_max_bandwidth_mb_sec": 0 00:24:02.740 } 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "bdev_iscsi_set_options", 00:24:02.740 "params": { 00:24:02.740 "timeout_sec": 30 00:24:02.740 } 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "bdev_nvme_set_options", 00:24:02.740 "params": { 00:24:02.740 "action_on_timeout": "none", 00:24:02.740 "timeout_us": 0, 00:24:02.740 "timeout_admin_us": 0, 00:24:02.740 "keep_alive_timeout_ms": 10000, 00:24:02.740 "arbitration_burst": 0, 00:24:02.740 "low_priority_weight": 0, 00:24:02.740 "medium_priority_weight": 0, 00:24:02.740 "high_priority_weight": 0, 00:24:02.740 "nvme_adminq_poll_period_us": 10000, 00:24:02.740 "nvme_ioq_poll_period_us": 0, 00:24:02.740 "io_queue_requests": 0, 00:24:02.740 "delay_cmd_submit": true, 00:24:02.740 "transport_retry_count": 4, 00:24:02.740 "bdev_retry_count": 3, 00:24:02.740 "transport_ack_timeout": 0, 00:24:02.740 "ctrlr_loss_timeout_sec": 0, 00:24:02.740 "reconnect_delay_sec": 0, 00:24:02.740 "fast_io_fail_timeout_sec": 0, 00:24:02.740 "disable_auto_failback": false, 00:24:02.740 "generate_uuids": false, 00:24:02.740 "transport_tos": 0, 00:24:02.740 "nvme_error_stat": false, 00:24:02.740 "rdma_srq_size": 0, 00:24:02.740 "io_path_stat": false, 00:24:02.740 "allow_accel_sequence": false, 00:24:02.740 "rdma_max_cq_size": 0, 00:24:02.740 "rdma_cm_event_timeout_ms": 0, 00:24:02.740 "dhchap_digests": [ 00:24:02.740 "sha256", 00:24:02.740 "sha384", 00:24:02.740 "sha512" 00:24:02.740 ], 00:24:02.740 "dhchap_dhgroups": [ 00:24:02.740 "null", 00:24:02.740 "ffdhe2048", 00:24:02.740 "ffdhe3072", 00:24:02.740 "ffdhe4096", 00:24:02.740 "ffdhe6144", 00:24:02.740 "ffdhe8192" 00:24:02.740 ] 00:24:02.740 } 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "bdev_nvme_set_hotplug", 00:24:02.740 "params": { 00:24:02.740 
"period_us": 100000, 00:24:02.740 "enable": false 00:24:02.740 } 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "bdev_malloc_create", 00:24:02.740 "params": { 00:24:02.740 "name": "malloc0", 00:24:02.740 "num_blocks": 8192, 00:24:02.740 "block_size": 4096, 00:24:02.740 "physical_block_size": 4096, 00:24:02.740 "uuid": "42eb35cd-5fbe-4a97-a765-8981ee934ced", 00:24:02.740 "optimal_io_boundary": 0, 00:24:02.740 "md_size": 0, 00:24:02.740 "dif_type": 0, 00:24:02.740 "dif_is_head_of_md": false, 00:24:02.740 "dif_pi_format": 0 00:24:02.740 } 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "bdev_wait_for_examine" 00:24:02.740 } 00:24:02.740 ] 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "subsystem": "nbd", 00:24:02.740 "config": [] 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "subsystem": "scheduler", 00:24:02.740 "config": [ 00:24:02.740 { 00:24:02.740 "method": "framework_set_scheduler", 00:24:02.740 "params": { 00:24:02.740 "name": "static" 00:24:02.740 } 00:24:02.740 } 00:24:02.740 ] 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "subsystem": "nvmf", 00:24:02.740 "config": [ 00:24:02.740 { 00:24:02.740 "method": "nvmf_set_config", 00:24:02.740 "params": { 00:24:02.740 "discovery_filter": "match_any", 00:24:02.740 "admin_cmd_passthru": { 00:24:02.740 "identify_ctrlr": false 00:24:02.740 }, 00:24:02.740 "dhchap_digests": [ 00:24:02.740 "sha256", 00:24:02.740 "sha384", 00:24:02.740 "sha512" 00:24:02.740 ], 00:24:02.740 "dhchap_dhgroups": [ 00:24:02.740 "null", 00:24:02.740 "ffdhe2048", 00:24:02.740 "ffdhe3072", 00:24:02.740 "ffdhe4096", 00:24:02.740 "ffdhe6144", 00:24:02.740 "ffdhe8192" 00:24:02.740 ] 00:24:02.740 } 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "nvmf_set_max_subsystems", 00:24:02.740 "params": { 00:24:02.740 "max_subsystems": 1024 00:24:02.740 } 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "nvmf_set_crdt", 00:24:02.740 "params": { 00:24:02.740 "crdt1": 0, 00:24:02.740 "crdt2": 0, 00:24:02.740 "crdt3": 0 00:24:02.740 } 
00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "nvmf_create_transport", 00:24:02.740 "params": { 00:24:02.740 "trtype": "TCP", 00:24:02.740 "max_queue_depth": 128, 00:24:02.740 "max_io_qpairs_per_ctrlr": 127, 00:24:02.740 "in_capsule_data_size": 4096, 00:24:02.740 "max_io_size": 131072, 00:24:02.740 "io_unit_size": 131072, 00:24:02.740 "max_aq_depth": 128, 00:24:02.740 "num_shared_buffers": 511, 00:24:02.740 "buf_cache_size": 4294967295, 00:24:02.740 "dif_insert_or_strip": false, 00:24:02.740 "zcopy": false, 00:24:02.740 "c2h_success": false, 00:24:02.740 "sock_priority": 0, 00:24:02.740 "abort_timeout_sec": 1, 00:24:02.740 "ack_timeout": 0, 00:24:02.740 "data_wr_pool_size": 0 00:24:02.740 } 00:24:02.740 }, 00:24:02.740 { 00:24:02.740 "method": "nvmf_create_subsystem", 00:24:02.740 "params": { 00:24:02.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.740 "allow_any_host": false, 00:24:02.740 "serial_number": "00000000000000000000", 00:24:02.740 "model_number": "SPDK bdev Controller", 00:24:02.740 "max_namespaces": 32, 00:24:02.741 "min_cntlid": 1, 00:24:02.741 "max_cntlid": 65519, 00:24:02.741 "ana_reporting": false 00:24:02.741 } 00:24:02.741 }, 00:24:02.741 { 00:24:02.741 "method": "nvmf_subsystem_add_host", 00:24:02.741 "params": { 00:24:02.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.741 "host": "nqn.2016-06.io.spdk:host1", 00:24:02.741 "psk": "key0" 00:24:02.741 } 00:24:02.741 }, 00:24:02.741 { 00:24:02.741 "method": "nvmf_subsystem_add_ns", 00:24:02.741 "params": { 00:24:02.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.741 "namespace": { 00:24:02.741 "nsid": 1, 00:24:02.741 "bdev_name": "malloc0", 00:24:02.741 "nguid": "42EB35CD5FBE4A97A7658981EE934CED", 00:24:02.741 "uuid": "42eb35cd-5fbe-4a97-a765-8981ee934ced", 00:24:02.741 "no_auto_visible": false 00:24:02.741 } 00:24:02.741 } 00:24:02.741 }, 00:24:02.741 { 00:24:02.741 "method": "nvmf_subsystem_add_listener", 00:24:02.741 "params": { 00:24:02.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:02.741 "listen_address": { 00:24:02.741 "trtype": "TCP", 00:24:02.741 "adrfam": "IPv4", 00:24:02.741 "traddr": "10.0.0.2", 00:24:02.741 "trsvcid": "4420" 00:24:02.741 }, 00:24:02.741 "secure_channel": false, 00:24:02.741 "sock_impl": "ssl" 00:24:02.741 } 00:24:02.741 } 00:24:02.741 ] 00:24:02.741 } 00:24:02.741 ] 00:24:02.741 }' 00:24:02.741 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.741 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3559852 00:24:02.741 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:02.741 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3559852 00:24:02.741 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3559852 ']' 00:24:02.741 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.741 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:02.741 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.741 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:02.741 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:02.741 [2024-10-12 22:12:21.121304] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:02.741 [2024-10-12 22:12:21.121362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.741 [2024-10-12 22:12:21.205603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.002 [2024-10-12 22:12:21.233549] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.002 [2024-10-12 22:12:21.233579] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.002 [2024-10-12 22:12:21.233585] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.002 [2024-10-12 22:12:21.233590] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.002 [2024-10-12 22:12:21.233594] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:03.002 [2024-10-12 22:12:21.233636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.002 [2024-10-12 22:12:21.433152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.002 [2024-10-12 22:12:21.465123] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:03.002 [2024-10-12 22:12:21.465303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3559935 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3559935 /var/tmp/bdevperf.sock 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3559935 ']' 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:03.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.576 22:12:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:03.576 "subsystems": [ 00:24:03.576 { 00:24:03.576 "subsystem": "keyring", 00:24:03.576 "config": [ 00:24:03.576 { 00:24:03.576 "method": "keyring_file_add_key", 00:24:03.576 "params": { 00:24:03.576 "name": "key0", 00:24:03.576 "path": "/tmp/tmp.4wINg4DIn2" 00:24:03.576 } 00:24:03.576 } 00:24:03.576 ] 00:24:03.576 }, 00:24:03.576 { 00:24:03.576 "subsystem": "iobuf", 00:24:03.576 "config": [ 00:24:03.576 { 00:24:03.577 "method": "iobuf_set_options", 00:24:03.577 "params": { 00:24:03.577 "small_pool_count": 8192, 00:24:03.577 "large_pool_count": 1024, 00:24:03.577 "small_bufsize": 8192, 00:24:03.577 "large_bufsize": 135168 00:24:03.577 } 00:24:03.577 } 00:24:03.577 ] 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "subsystem": "sock", 00:24:03.577 "config": [ 00:24:03.577 { 00:24:03.577 "method": "sock_set_default_impl", 00:24:03.577 "params": { 00:24:03.577 "impl_name": "posix" 00:24:03.577 } 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "method": "sock_impl_set_options", 00:24:03.577 "params": { 00:24:03.577 "impl_name": "ssl", 00:24:03.577 "recv_buf_size": 4096, 00:24:03.577 "send_buf_size": 4096, 00:24:03.577 "enable_recv_pipe": true, 00:24:03.577 "enable_quickack": false, 00:24:03.577 "enable_placement_id": 0, 00:24:03.577 "enable_zerocopy_send_server": true, 00:24:03.577 "enable_zerocopy_send_client": false, 00:24:03.577 
"zerocopy_threshold": 0, 00:24:03.577 "tls_version": 0, 00:24:03.577 "enable_ktls": false 00:24:03.577 } 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "method": "sock_impl_set_options", 00:24:03.577 "params": { 00:24:03.577 "impl_name": "posix", 00:24:03.577 "recv_buf_size": 2097152, 00:24:03.577 "send_buf_size": 2097152, 00:24:03.577 "enable_recv_pipe": true, 00:24:03.577 "enable_quickack": false, 00:24:03.577 "enable_placement_id": 0, 00:24:03.577 "enable_zerocopy_send_server": true, 00:24:03.577 "enable_zerocopy_send_client": false, 00:24:03.577 "zerocopy_threshold": 0, 00:24:03.577 "tls_version": 0, 00:24:03.577 "enable_ktls": false 00:24:03.577 } 00:24:03.577 } 00:24:03.577 ] 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "subsystem": "vmd", 00:24:03.577 "config": [] 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "subsystem": "accel", 00:24:03.577 "config": [ 00:24:03.577 { 00:24:03.577 "method": "accel_set_options", 00:24:03.577 "params": { 00:24:03.577 "small_cache_size": 128, 00:24:03.577 "large_cache_size": 16, 00:24:03.577 "task_count": 2048, 00:24:03.577 "sequence_count": 2048, 00:24:03.577 "buf_count": 2048 00:24:03.577 } 00:24:03.577 } 00:24:03.577 ] 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "subsystem": "bdev", 00:24:03.577 "config": [ 00:24:03.577 { 00:24:03.577 "method": "bdev_set_options", 00:24:03.577 "params": { 00:24:03.577 "bdev_io_pool_size": 65535, 00:24:03.577 "bdev_io_cache_size": 256, 00:24:03.577 "bdev_auto_examine": true, 00:24:03.577 "iobuf_small_cache_size": 128, 00:24:03.577 "iobuf_large_cache_size": 16 00:24:03.577 } 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "method": "bdev_raid_set_options", 00:24:03.577 "params": { 00:24:03.577 "process_window_size_kb": 1024, 00:24:03.577 "process_max_bandwidth_mb_sec": 0 00:24:03.577 } 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "method": "bdev_iscsi_set_options", 00:24:03.577 "params": { 00:24:03.577 "timeout_sec": 30 00:24:03.577 } 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "method": 
"bdev_nvme_set_options", 00:24:03.577 "params": { 00:24:03.577 "action_on_timeout": "none", 00:24:03.577 "timeout_us": 0, 00:24:03.577 "timeout_admin_us": 0, 00:24:03.577 "keep_alive_timeout_ms": 10000, 00:24:03.577 "arbitration_burst": 0, 00:24:03.577 "low_priority_weight": 0, 00:24:03.577 "medium_priority_weight": 0, 00:24:03.577 "high_priority_weight": 0, 00:24:03.577 "nvme_adminq_poll_period_us": 10000, 00:24:03.577 "nvme_ioq_poll_period_us": 0, 00:24:03.577 "io_queue_requests": 512, 00:24:03.577 "delay_cmd_submit": true, 00:24:03.577 "transport_retry_count": 4, 00:24:03.577 "bdev_retry_count": 3, 00:24:03.577 "transport_ack_timeout": 0, 00:24:03.577 "ctrlr_loss_timeout_sec": 0, 00:24:03.577 "reconnect_delay_sec": 0, 00:24:03.577 "fast_io_fail_timeout_sec": 0, 00:24:03.577 "disable_auto_failback": false, 00:24:03.577 "generate_uuids": false, 00:24:03.577 "transport_tos": 0, 00:24:03.577 "nvme_error_stat": false, 00:24:03.577 "rdma_srq_size": 0, 00:24:03.577 "io_path_stat": false, 00:24:03.577 "allow_accel_sequence": false, 00:24:03.577 "rdma_max_cq_size": 0, 00:24:03.577 "rdma_cm_event_timeout_ms": 0, 00:24:03.577 "dhchap_digests": [ 00:24:03.577 "sha256", 00:24:03.577 "sha384", 00:24:03.577 "sha512" 00:24:03.577 ], 00:24:03.577 "dhchap_dhgroups": [ 00:24:03.577 "null", 00:24:03.577 "ffdhe2048", 00:24:03.577 "ffdhe3072", 00:24:03.577 "ffdhe4096", 00:24:03.577 "ffdhe6144", 00:24:03.577 "ffdhe8192" 00:24:03.577 ] 00:24:03.577 } 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "method": "bdev_nvme_attach_controller", 00:24:03.577 "params": { 00:24:03.577 "name": "nvme0", 00:24:03.577 "trtype": "TCP", 00:24:03.577 "adrfam": "IPv4", 00:24:03.577 "traddr": "10.0.0.2", 00:24:03.577 "trsvcid": "4420", 00:24:03.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.577 "prchk_reftag": false, 00:24:03.577 "prchk_guard": false, 00:24:03.577 "ctrlr_loss_timeout_sec": 0, 00:24:03.577 "reconnect_delay_sec": 0, 00:24:03.577 "fast_io_fail_timeout_sec": 0, 00:24:03.577 "psk": "key0", 
00:24:03.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.577 "hdgst": false, 00:24:03.577 "ddgst": false 00:24:03.577 } 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "method": "bdev_nvme_set_hotplug", 00:24:03.577 "params": { 00:24:03.577 "period_us": 100000, 00:24:03.577 "enable": false 00:24:03.577 } 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "method": "bdev_enable_histogram", 00:24:03.577 "params": { 00:24:03.577 "name": "nvme0n1", 00:24:03.577 "enable": true 00:24:03.577 } 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "method": "bdev_wait_for_examine" 00:24:03.577 } 00:24:03.577 ] 00:24:03.577 }, 00:24:03.577 { 00:24:03.577 "subsystem": "nbd", 00:24:03.577 "config": [] 00:24:03.577 } 00:24:03.577 ] 00:24:03.577 }' 00:24:03.577 [2024-10-12 22:12:21.994600] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:03.577 [2024-10-12 22:12:21.994654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3559935 ] 00:24:03.839 [2024-10-12 22:12:22.070778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.839 [2024-10-12 22:12:22.099500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.839 [2024-10-12 22:12:22.228423] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.412 22:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.412 22:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:04.412 22:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:04.412 22:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:24:04.673 22:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.673 22:12:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:04.673 Running I/O for 1 seconds... 00:24:05.617 5384.00 IOPS, 21.03 MiB/s 00:24:05.617 Latency(us) 00:24:05.617 [2024-10-12T20:12:24.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.617 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:05.617 Verification LBA range: start 0x0 length 0x2000 00:24:05.617 nvme0n1 : 1.01 5452.52 21.30 0.00 0.00 23338.12 4450.99 31894.19 00:24:05.617 [2024-10-12T20:12:24.106Z] =================================================================================================================== 00:24:05.617 [2024-10-12T20:12:24.106Z] Total : 5452.52 21.30 0.00 0.00 23338.12 4450.99 31894.19 00:24:05.617 { 00:24:05.617 "results": [ 00:24:05.617 { 00:24:05.617 "job": "nvme0n1", 00:24:05.617 "core_mask": "0x2", 00:24:05.617 "workload": "verify", 00:24:05.617 "status": "finished", 00:24:05.617 "verify_range": { 00:24:05.617 "start": 0, 00:24:05.617 "length": 8192 00:24:05.617 }, 00:24:05.617 "queue_depth": 128, 00:24:05.617 "io_size": 4096, 00:24:05.617 "runtime": 1.011093, 00:24:05.617 "iops": 5452.515248350053, 00:24:05.617 "mibps": 21.298887688867396, 00:24:05.617 "io_failed": 0, 00:24:05.617 "io_timeout": 0, 00:24:05.617 "avg_latency_us": 23338.118415865527, 00:24:05.617 "min_latency_us": 4450.986666666667, 00:24:05.617 "max_latency_us": 31894.18666666667 00:24:05.617 } 00:24:05.617 ], 00:24:05.617 "core_count": 1 00:24:05.617 } 00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 
00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:05.617 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:05.617 nvmf_trace.0 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3559935 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3559935 ']' 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3559935 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3559935 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3559935' 00:24:05.880 killing process with pid 3559935 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3559935 00:24:05.880 Received shutdown signal, test time was about 1.000000 seconds 00:24:05.880 00:24:05.880 Latency(us) 00:24:05.880 [2024-10-12T20:12:24.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.880 [2024-10-12T20:12:24.369Z] =================================================================================================================== 00:24:05.880 [2024-10-12T20:12:24.369Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3559935 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.880 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.880 rmmod nvme_tcp 00:24:06.141 rmmod nvme_fabrics 00:24:06.141 rmmod nvme_keyring 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@128 -- # set -e 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 3559852 ']' 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 3559852 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3559852 ']' 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3559852 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3559852 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3559852' 00:24:06.141 killing process with pid 3559852 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3559852 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3559852 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:06.141 22:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.141 22:12:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.BySZAEN6VA /tmp/tmp.JAM0IBa7AE /tmp/tmp.4wINg4DIn2 00:24:08.690 00:24:08.690 real 1m27.193s 00:24:08.690 user 2m15.256s 00:24:08.690 sys 0m28.807s 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.690 ************************************ 00:24:08.690 END TEST nvmf_tls 00:24:08.690 ************************************ 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:08.690 ************************************ 00:24:08.690 START TEST nvmf_fips 00:24:08.690 ************************************ 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.690 * Looking for test storage... 00:24:08.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@341 -- # ver2_l=1 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.690 --rc genhtml_branch_coverage=1 00:24:08.690 --rc genhtml_function_coverage=1 00:24:08.690 --rc genhtml_legend=1 00:24:08.690 --rc geninfo_all_blocks=1 00:24:08.690 --rc geninfo_unexecuted_blocks=1 00:24:08.690 00:24:08.690 ' 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.690 --rc genhtml_branch_coverage=1 00:24:08.690 --rc genhtml_function_coverage=1 00:24:08.690 --rc genhtml_legend=1 00:24:08.690 --rc geninfo_all_blocks=1 00:24:08.690 --rc geninfo_unexecuted_blocks=1 00:24:08.690 00:24:08.690 ' 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.690 --rc genhtml_branch_coverage=1 00:24:08.690 --rc genhtml_function_coverage=1 00:24:08.690 --rc genhtml_legend=1 00:24:08.690 --rc geninfo_all_blocks=1 00:24:08.690 --rc geninfo_unexecuted_blocks=1 00:24:08.690 00:24:08.690 ' 00:24:08.690 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.690 --rc genhtml_branch_coverage=1 00:24:08.690 --rc genhtml_function_coverage=1 00:24:08.691 --rc genhtml_legend=1 00:24:08.691 --rc geninfo_all_blocks=1 00:24:08.691 --rc geninfo_unexecuted_blocks=1 00:24:08.691 00:24:08.691 ' 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.691 22:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:08.691 22:12:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.691 22:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:08.691 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:08.692 Error setting digest 00:24:08.692 400290701E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:08.692 400290701E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:08.692 22:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.692 22:12:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.836 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:16.837 22:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:16.837 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:16.837 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.837 22:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:16.837 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.837 
22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:16.837 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:24:16.837 00:24:16.837 --- 10.0.0.2 ping statistics --- 00:24:16.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.837 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:24:16.837 00:24:16.837 --- 10.0.0.1 ping statistics --- 00:24:16.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.837 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=3564639 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 3564639 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3564639 ']' 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.837 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.838 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.838 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.838 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.838 [2024-10-12 22:12:34.744848] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:16.838 [2024-10-12 22:12:34.744913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.838 [2024-10-12 22:12:34.811533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.838 [2024-10-12 22:12:34.855736] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.838 [2024-10-12 22:12:34.855788] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.838 [2024-10-12 22:12:34.855797] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.838 [2024-10-12 22:12:34.855803] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.838 [2024-10-12 22:12:34.855808] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:16.838 [2024-10-12 22:12:34.855832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.838 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.838 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:16.838 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:16.838 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.838 22:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.a1P 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.a1P 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.a1P 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.a1P 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:16.838 [2024-10-12 22:12:35.183195] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.838 [2024-10-12 22:12:35.199205] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.838 [2024-10-12 22:12:35.199508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.838 malloc0 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3564898 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3564898 /var/tmp/bdevperf.sock 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3564898 ']' 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.838 22:12:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:17.099 [2024-10-12 22:12:35.359049] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:17.099 [2024-10-12 22:12:35.359142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3564898 ] 00:24:17.099 [2024-10-12 22:12:35.442688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.099 [2024-10-12 22:12:35.491917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.671 22:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.671 22:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:17.671 22:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.a1P 00:24:17.932 22:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:18.193 [2024-10-12 22:12:36.487459] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.193 TLSTESTn1 00:24:18.193 22:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:18.454 Running I/O for 10 seconds... 
00:24:20.355 4489.00 IOPS, 17.54 MiB/s [2024-10-12T20:12:39.803Z] 4463.00 IOPS, 17.43 MiB/s [2024-10-12T20:12:40.747Z] 4527.67 IOPS, 17.69 MiB/s [2024-10-12T20:12:42.132Z] 4633.75 IOPS, 18.10 MiB/s [2024-10-12T20:12:42.706Z] 4737.80 IOPS, 18.51 MiB/s [2024-10-12T20:12:44.093Z] 4769.00 IOPS, 18.63 MiB/s [2024-10-12T20:12:45.034Z] 4833.57 IOPS, 18.88 MiB/s [2024-10-12T20:12:45.976Z] 4834.50 IOPS, 18.88 MiB/s [2024-10-12T20:12:46.917Z] 4826.11 IOPS, 18.85 MiB/s [2024-10-12T20:12:46.917Z] 4859.80 IOPS, 18.98 MiB/s 00:24:28.428 Latency(us) 00:24:28.428 [2024-10-12T20:12:46.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.428 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:28.428 Verification LBA range: start 0x0 length 0x2000 00:24:28.428 TLSTESTn1 : 10.02 4863.68 19.00 0.00 0.00 26277.06 5215.57 48933.55 00:24:28.428 [2024-10-12T20:12:46.917Z] =================================================================================================================== 00:24:28.428 [2024-10-12T20:12:46.917Z] Total : 4863.68 19.00 0.00 0.00 26277.06 5215.57 48933.55 00:24:28.428 { 00:24:28.428 "results": [ 00:24:28.428 { 00:24:28.428 "job": "TLSTESTn1", 00:24:28.428 "core_mask": "0x4", 00:24:28.428 "workload": "verify", 00:24:28.428 "status": "finished", 00:24:28.428 "verify_range": { 00:24:28.428 "start": 0, 00:24:28.428 "length": 8192 00:24:28.428 }, 00:24:28.428 "queue_depth": 128, 00:24:28.428 "io_size": 4096, 00:24:28.428 "runtime": 10.018335, 00:24:28.428 "iops": 4863.68243824947, 00:24:28.428 "mibps": 18.99875952441199, 00:24:28.428 "io_failed": 0, 00:24:28.428 "io_timeout": 0, 00:24:28.428 "avg_latency_us": 26277.055815238957, 00:24:28.428 "min_latency_us": 5215.573333333334, 00:24:28.428 "max_latency_us": 48933.54666666667 00:24:28.428 } 00:24:28.428 ], 00:24:28.428 "core_count": 1 00:24:28.428 } 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:28.428 
22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:28.428 nvmf_trace.0 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3564898 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3564898 ']' 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3564898 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3564898 00:24:28.428 22:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3564898' 00:24:28.428 killing process with pid 3564898 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3564898 00:24:28.428 Received shutdown signal, test time was about 10.000000 seconds 00:24:28.428 00:24:28.428 Latency(us) 00:24:28.428 [2024-10-12T20:12:46.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.428 [2024-10-12T20:12:46.917Z] =================================================================================================================== 00:24:28.428 [2024-10-12T20:12:46.917Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.428 22:12:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3564898 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.689 rmmod nvme_tcp 00:24:28.689 rmmod nvme_fabrics 00:24:28.689 rmmod nvme_keyring 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 3564639 ']' 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 3564639 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3564639 ']' 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3564639 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3564639 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3564639' 00:24:28.689 killing process with pid 3564639 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3564639 00:24:28.689 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3564639 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.950 22:12:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.a1P 00:24:31.496 00:24:31.496 real 0m22.601s 00:24:31.496 user 0m23.672s 00:24:31.496 sys 0m10.012s 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:31.496 ************************************ 00:24:31.496 END TEST nvmf_fips 00:24:31.496 ************************************ 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:31.496 ************************************ 00:24:31.496 START TEST nvmf_control_msg_list 00:24:31.496 ************************************ 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:31.496 * Looking for test storage... 00:24:31.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.496 22:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:31.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.496 --rc genhtml_branch_coverage=1 00:24:31.496 --rc genhtml_function_coverage=1 00:24:31.496 --rc genhtml_legend=1 00:24:31.496 --rc geninfo_all_blocks=1 00:24:31.496 --rc geninfo_unexecuted_blocks=1 00:24:31.496 00:24:31.496 ' 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:31.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.496 --rc genhtml_branch_coverage=1 00:24:31.496 --rc genhtml_function_coverage=1 00:24:31.496 --rc genhtml_legend=1 00:24:31.496 --rc geninfo_all_blocks=1 00:24:31.496 --rc geninfo_unexecuted_blocks=1 00:24:31.496 00:24:31.496 ' 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:31.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.496 --rc genhtml_branch_coverage=1 00:24:31.496 --rc genhtml_function_coverage=1 00:24:31.496 --rc genhtml_legend=1 00:24:31.496 --rc geninfo_all_blocks=1 00:24:31.496 --rc geninfo_unexecuted_blocks=1 00:24:31.496 00:24:31.496 ' 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # 
LCOV='lcov 00:24:31.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.496 --rc genhtml_branch_coverage=1 00:24:31.496 --rc genhtml_function_coverage=1 00:24:31.496 --rc genhtml_legend=1 00:24:31.496 --rc geninfo_all_blocks=1 00:24:31.496 --rc geninfo_unexecuted_blocks=1 00:24:31.496 00:24:31.496 ' 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.496 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.497 22:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.497 22:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.497 22:12:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.649 22:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.649 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:24:39.650 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:39.650 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:39.650 22:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:39.650 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:39.650 22:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:39.650 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 
-- # NVMF_SECOND_TARGET_IP= 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.650 22:12:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:24:39.650 00:24:39.650 --- 10.0.0.2 ping statistics --- 00:24:39.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.650 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:24:39.650 00:24:39.650 --- 10.0.0.1 ping statistics --- 00:24:39.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.650 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:39.650 22:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=3571327 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 3571327 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3571327 ']' 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.650 22:12:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:39.650 [2024-10-12 22:12:57.286479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:39.651 [2024-10-12 22:12:57.286543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.651 [2024-10-12 22:12:57.373894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.651 [2024-10-12 22:12:57.419265] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.651 [2024-10-12 22:12:57.419321] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.651 [2024-10-12 22:12:57.419330] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.651 [2024-10-12 22:12:57.419337] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.651 [2024-10-12 22:12:57.419344] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:39.651 [2024-10-12 22:12:57.419368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.651 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.651 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:24:39.651 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:39.651 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.651 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:39.920 [2024-10-12 22:12:58.166738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:39.920 Malloc0 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:39.920 [2024-10-12 22:12:58.238093] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3571559 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3571561 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3571563 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3571559 00:24:39.920 22:12:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:39.920 [2024-10-12 22:12:58.308761] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:39.920 [2024-10-12 22:12:58.308986] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:39.920 [2024-10-12 22:12:58.318792] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:40.908 Initializing NVMe Controllers 00:24:40.908 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:40.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:40.908 Initialization complete. Launching workers. 00:24:40.908 ======================================================== 00:24:40.908 Latency(us) 00:24:40.908 Device Information : IOPS MiB/s Average min max 00:24:40.908 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 24.00 0.09 41785.47 40825.69 42177.69 00:24:40.908 ======================================================== 00:24:40.908 Total : 24.00 0.09 41785.47 40825.69 42177.69 00:24:40.908 00:24:40.908 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3571561 00:24:41.169 Initializing NVMe Controllers 00:24:41.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:41.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:41.169 Initialization complete. Launching workers. 
00:24:41.169 ======================================================== 00:24:41.169 Latency(us) 00:24:41.169 Device Information : IOPS MiB/s Average min max 00:24:41.169 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 24.00 0.09 41795.42 40885.02 41954.87 00:24:41.169 ======================================================== 00:24:41.169 Total : 24.00 0.09 41795.42 40885.02 41954.87 00:24:41.169 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3571563 00:24:41.169 Initializing NVMe Controllers 00:24:41.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:41.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:41.169 Initialization complete. Launching workers. 00:24:41.169 ======================================================== 00:24:41.169 Latency(us) 00:24:41.169 Device Information : IOPS MiB/s Average min max 00:24:41.169 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41635.88 40852.60 42214.06 00:24:41.169 ======================================================== 00:24:41.169 Total : 25.00 0.10 41635.88 40852.60 42214.06 00:24:41.169 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:41.169 22:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:41.169 rmmod nvme_tcp 00:24:41.169 rmmod nvme_fabrics 00:24:41.169 rmmod nvme_keyring 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 3571327 ']' 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 3571327 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3571327 ']' 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3571327 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.169 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3571327 00:24:41.430 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:41.430 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:41.430 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3571327' 00:24:41.430 killing process with pid 3571327 00:24:41.430 
22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3571327 00:24:41.430 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3571327 00:24:41.430 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:41.430 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:41.430 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:41.430 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:41.431 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:24:41.431 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:41.431 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:24:41.431 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.431 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:41.431 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.431 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.431 22:12:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.977 22:13:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:43.977 00:24:43.977 real 0m12.473s 00:24:43.977 user 0m8.006s 00:24:43.977 sys 0m6.502s 00:24:43.977 22:13:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:24:43.977 22:13:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:43.977 ************************************ 00:24:43.977 END TEST nvmf_control_msg_list 00:24:43.977 ************************************ 00:24:43.977 22:13:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:43.977 22:13:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:43.977 22:13:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:43.977 22:13:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:43.977 ************************************ 00:24:43.977 START TEST nvmf_wait_for_buf 00:24:43.977 ************************************ 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:43.977 * Looking for test storage... 
00:24:43.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:24:43.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.977 --rc genhtml_branch_coverage=1 00:24:43.977 --rc genhtml_function_coverage=1 00:24:43.977 --rc genhtml_legend=1 00:24:43.977 --rc geninfo_all_blocks=1 00:24:43.977 --rc geninfo_unexecuted_blocks=1 00:24:43.977 00:24:43.977 ' 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:43.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.977 --rc genhtml_branch_coverage=1 00:24:43.977 --rc genhtml_function_coverage=1 00:24:43.977 --rc genhtml_legend=1 00:24:43.977 --rc geninfo_all_blocks=1 00:24:43.977 --rc geninfo_unexecuted_blocks=1 00:24:43.977 00:24:43.977 ' 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:43.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.977 --rc genhtml_branch_coverage=1 00:24:43.977 --rc genhtml_function_coverage=1 00:24:43.977 --rc genhtml_legend=1 00:24:43.977 --rc geninfo_all_blocks=1 00:24:43.977 --rc geninfo_unexecuted_blocks=1 00:24:43.977 00:24:43.977 ' 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:43.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.977 --rc genhtml_branch_coverage=1 00:24:43.977 --rc genhtml_function_coverage=1 00:24:43.977 --rc genhtml_legend=1 00:24:43.977 --rc geninfo_all_blocks=1 00:24:43.977 --rc geninfo_unexecuted_blocks=1 00:24:43.977 00:24:43.977 ' 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.977 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:43.978 22:13:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 
00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:52.120 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:52.120 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 
0 > 0 )) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:52.120 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:52.120 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:52.120 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:52.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:24:52.120 00:24:52.120 --- 10.0.0.2 ping statistics --- 00:24:52.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.121 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:24:52.121 00:24:52.121 --- 10.0.0.1 ping statistics --- 00:24:52.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.121 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == 
tcp ']' 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=3576018 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 3576018 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3576018 ']' 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.121 22:13:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.121 [2024-10-12 22:13:09.797586] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:52.121 [2024-10-12 22:13:09.797653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.121 [2024-10-12 22:13:09.885091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.121 [2024-10-12 22:13:09.932459] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.121 [2024-10-12 22:13:09.932512] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.121 [2024-10-12 22:13:09.932525] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.121 [2024-10-12 22:13:09.932532] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.121 [2024-10-12 22:13:09.932538] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:52.121 [2024-10-12 22:13:09.932564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.383 Malloc0 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.383 [2024-10-12 22:13:10.784299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:52.383 [2024-10-12 22:13:10.820594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.383 22:13:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:52.644 [2024-10-12 22:13:10.906250] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:54.028 Initializing NVMe Controllers 00:24:54.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:54.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:54.028 Initialization complete. Launching workers. 00:24:54.028 ======================================================== 00:24:54.028 Latency(us) 00:24:54.028 Device Information : IOPS MiB/s Average min max 00:24:54.028 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.64 8011.49 63856.19 00:24:54.028 ======================================================== 00:24:54.028 Total : 129.00 16.12 32294.64 8011.49 63856.19 00:24:54.028 00:24:54.028 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:54.028 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.029 rmmod nvme_tcp 00:24:54.029 rmmod nvme_fabrics 00:24:54.029 rmmod nvme_keyring 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 3576018 ']' 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 3576018 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3576018 ']' 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3576018 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:54.029 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3576018 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3576018' 00:24:54.290 killing process with pid 3576018 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3576018 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3576018 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.290 22:13:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:56.838 00:24:56.838 real 0m12.772s 00:24:56.838 user 0m5.279s 00:24:56.838 sys 0m6.086s 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:56.838 ************************************ 00:24:56.838 END TEST nvmf_wait_for_buf 00:24:56.838 ************************************ 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:56.838 ************************************ 00:24:56.838 START TEST nvmf_fuzz 00:24:56.838 ************************************ 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:56.838 * Looking for test storage... 
00:24:56.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:24:56.838 22:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:56.838 
22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:56.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.838 --rc genhtml_branch_coverage=1 00:24:56.838 --rc genhtml_function_coverage=1 00:24:56.838 --rc genhtml_legend=1 00:24:56.838 --rc geninfo_all_blocks=1 00:24:56.838 --rc 
geninfo_unexecuted_blocks=1 00:24:56.838 00:24:56.838 ' 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:56.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.838 --rc genhtml_branch_coverage=1 00:24:56.838 --rc genhtml_function_coverage=1 00:24:56.838 --rc genhtml_legend=1 00:24:56.838 --rc geninfo_all_blocks=1 00:24:56.838 --rc geninfo_unexecuted_blocks=1 00:24:56.838 00:24:56.838 ' 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:56.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.838 --rc genhtml_branch_coverage=1 00:24:56.838 --rc genhtml_function_coverage=1 00:24:56.838 --rc genhtml_legend=1 00:24:56.838 --rc geninfo_all_blocks=1 00:24:56.838 --rc geninfo_unexecuted_blocks=1 00:24:56.838 00:24:56.838 ' 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:56.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.838 --rc genhtml_branch_coverage=1 00:24:56.838 --rc genhtml_function_coverage=1 00:24:56.838 --rc genhtml_legend=1 00:24:56.838 --rc geninfo_all_blocks=1 00:24:56.838 --rc geninfo_unexecuted_blocks=1 00:24:56.838 00:24:56.838 ' 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
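The xtrace above (scripts/common.sh@333-368) walks through the `lt 1.15 2` version check: each version string is split on `.-:` into an array and compared component-wise. A minimal standalone re-sketch of that logic — a simplified reimplementation for illustration, not the SPDK original:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions/lt logic traced above: split each version
# on ".-:" and compare numerically, component by component. Missing
# components are treated as 0 (so 1.15 vs 2 compares 1<2 first).
lt() { # succeeds (returns 0) iff $1 < $2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1 # equal, therefore not less-than
}
lt 1.15 2 && echo "lcov 1.15 predates 2 -> enable branch/function coverage opts"
```

This is why the run above appends `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1`: the detected lcov is older than 2.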
00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.838 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.839 22:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:56.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:56.839 22:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:04.982 22:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:04.982 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:04.982 22:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:04.982 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.982 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:04.983 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:04.983 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # is_hw=yes 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:04.983 22:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:04.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:04.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:25:04.983 00:25:04.983 --- 10.0.0.2 ping statistics --- 00:25:04.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.983 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:04.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:04.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:25:04.983 00:25:04.983 --- 10.0.0.1 ping statistics --- 00:25:04.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.983 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # return 0 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3580709 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3580709 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' 
-z 3580709 ']' 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.983 22:13:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:05.243 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.243 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:05.243 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:05.243 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:05.244 Malloc0 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:05.244 22:13:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:37.370 Fuzzing completed. 
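The fuzz runs above are driven by a transport-ID ("trid") string assembled from the test environment (fabrics_fuzz.sh@27) and passed to `nvme_fuzz` via `-F`. A sketch of how that string is built — the values mirror this log, but the `nvme_fuzz` binary path is environment-specific, so its invocations are shown as comments only:

```shell
#!/usr/bin/env bash
# Assemble the connection trid handed to nvme_fuzz. These values match
# the run in this log (TCP target in the netns at 10.0.0.2:4420).
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
SUBNQN=nqn.2016-06.io.spdk:cnode1
trid="trtype:tcp adrfam:IPv4 subnqn:$SUBNQN traddr:$NVMF_FIRST_TARGET_IP trsvcid:$NVMF_PORT"
echo "$trid"
# First pass: 30-second timed run (-t 30) with a fixed seed (-S 123456):
#   nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
# Second pass: replay a canned command set from a JSON file (-j):
#   nvme_fuzz -m 0x2 -F "$trid" -j example.json -a
```

The fixed seed makes the first pass reproducible across nightly runs; the `-j` replay exercises a known-interesting corpus rather than random commands.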
Shutting down the fuzz application 00:25:37.370 00:25:37.370 Dumping successful admin opcodes: 00:25:37.370 8, 9, 10, 24, 00:25:37.370 Dumping successful io opcodes: 00:25:37.370 0, 9, 00:25:37.370 NS: 0x200003aeff00 I/O qp, Total commands completed: 1037212, total successful commands: 6097, random_seed: 3657429312 00:25:37.370 NS: 0x200003aeff00 admin qp, Total commands completed: 139808, total successful commands: 1132, random_seed: 577081344 00:25:37.370 22:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:37.370 Fuzzing completed. Shutting down the fuzz application 00:25:37.370 00:25:37.370 Dumping successful admin opcodes: 00:25:37.370 24, 00:25:37.370 Dumping successful io opcodes: 00:25:37.370 00:25:37.370 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 149848703 00:25:37.370 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 149923369 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:37.370 22:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:37.370 rmmod nvme_tcp 00:25:37.370 rmmod nvme_fabrics 00:25:37.370 rmmod nvme_keyring 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 3580709 ']' 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 3580709 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3580709 ']' 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 3580709 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3580709 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3580709' 00:25:37.370 killing process with pid 3580709 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 3580709 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 3580709 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.370 22:13:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.918 22:13:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:39.918 22:13:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:39.918 00:25:39.918 real 0m43.077s 00:25:39.918 user 0m55.798s 00:25:39.918 sys 0m16.852s 00:25:39.918 22:13:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:39.918 22:13:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:39.918 ************************************ 00:25:39.918 END TEST nvmf_fuzz 00:25:39.918 ************************************ 00:25:39.918 22:13:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:39.918 22:13:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:39.918 22:13:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.918 22:13:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:39.918 ************************************ 00:25:39.918 START TEST nvmf_multiconnection 00:25:39.918 ************************************ 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:39.918 * Looking for test storage... 
00:25:39.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:39.918 22:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:39.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.918 --rc genhtml_branch_coverage=1 00:25:39.918 --rc genhtml_function_coverage=1 00:25:39.918 --rc genhtml_legend=1 00:25:39.918 --rc geninfo_all_blocks=1 00:25:39.918 --rc geninfo_unexecuted_blocks=1 00:25:39.918 00:25:39.918 ' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:39.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.918 --rc genhtml_branch_coverage=1 00:25:39.918 --rc genhtml_function_coverage=1 00:25:39.918 --rc genhtml_legend=1 00:25:39.918 --rc geninfo_all_blocks=1 00:25:39.918 --rc geninfo_unexecuted_blocks=1 00:25:39.918 00:25:39.918 ' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:39.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.918 --rc genhtml_branch_coverage=1 00:25:39.918 --rc genhtml_function_coverage=1 00:25:39.918 --rc genhtml_legend=1 00:25:39.918 --rc geninfo_all_blocks=1 00:25:39.918 --rc geninfo_unexecuted_blocks=1 00:25:39.918 00:25:39.918 ' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:39.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.918 --rc genhtml_branch_coverage=1 00:25:39.918 --rc genhtml_function_coverage=1 00:25:39.918 --rc genhtml_legend=1 00:25:39.918 --rc geninfo_all_blocks=1 00:25:39.918 --rc geninfo_unexecuted_blocks=1 00:25:39.918 00:25:39.918 ' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.918 22:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:25:39.918 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:39.919 22:13:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.063 22:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:48.063 22:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:48.063 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:48.063 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:48.063 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ up == up ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # (( 1 == 0 )) 
00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:48.063 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # is_hw=yes 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:48.063 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:48.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:25:48.064 00:25:48.064 --- 10.0.0.2 ping statistics --- 00:25:48.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.064 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:25:48.064 00:25:48.064 --- 10.0.0.1 ping statistics --- 00:25:48.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.064 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # return 0 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:48.064 22:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=3591323 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 3591323 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 3591323 ']' 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:48.064 22:14:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.064 [2024-10-12 22:14:05.859995] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:48.064 [2024-10-12 22:14:05.860066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.064 [2024-10-12 22:14:05.951868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:48.064 [2024-10-12 22:14:06.001156] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.064 [2024-10-12 22:14:06.001205] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.064 [2024-10-12 22:14:06.001214] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.064 [2024-10-12 22:14:06.001221] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.064 [2024-10-12 22:14:06.001228] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:48.064 [2024-10-12 22:14:06.001376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.064 [2024-10-12 22:14:06.001533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.064 [2024-10-12 22:14:06.001688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.064 [2024-10-12 22:14:06.001688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.325 [2024-10-12 22:14:06.742469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:48.325 22:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.325 Malloc1 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.325 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.326 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.326 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.326 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.587 [2024-10-12 22:14:06.815973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.587 Malloc2 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.587 Malloc3 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:48.587 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.588 Malloc4 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.588 
22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.588 22:14:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.588 Malloc5 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.588 22:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:48.588 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.849 Malloc6 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.849 Malloc7 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:48.849 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 Malloc8 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 Malloc9 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.850 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.111 Malloc10 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.111 Malloc11 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:49.111 
22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.111 22:14:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
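The xtrace above repeats the same four RPCs for cnode1 through cnode11 before the `nvme connect` phase begins. The loop driving it can be sketched as follows. This is a dry-run stand-in, not the script itself: `rpc_cmd` normally wraps SPDK's `scripts/rpc.py` against a live target, and here it only echoes the command; variable names other than `NVMF_SUBSYS` are illustrative, while the RPC names, sizes, NQNs, and the 10.0.0.2:4420 listener all come straight from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the setup loop seen in target/multiconnection.sh:
# one 64 MiB malloc bdev (512 B blocks), one subsystem, one namespace,
# and one TCP listener per index.
NVMF_SUBSYS=11
TARGET_IP=10.0.0.2   # the listener address shown in the log

# Stand-in for the real rpc_cmd wrapper: echo instead of calling rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a "$TARGET_IP" -s 4420
done
```

With the echo stand-in, the loop prints the 44 RPC invocations (4 per subsystem) that the xtrace above executes one by one.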
00:25:50.494 22:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:50.494 22:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:50.494 22:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.494 22:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:50.494 22:14:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:53.037 22:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:53.037 22:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:53.037 22:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:53.037 22:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:53.037 22:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.037 22:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:53.037 22:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.037 22:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:54.424 22:14:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:54.424 22:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:54.424 22:14:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.424 22:14:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:54.424 22:14:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:56.334 22:14:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:56.334 22:14:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:56.334 22:14:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:56.334 22:14:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:56.334 22:14:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.334 22:14:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:56.334 22:14:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.334 22:14:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:57.716 22:14:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:57.716 22:14:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:57.716 22:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:57.716 22:14:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:57.716 22:14:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:00.267 22:14:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:00.267 22:14:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:00.267 22:14:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:00.267 22:14:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:00.267 22:14:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.267 22:14:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:00.267 22:14:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.267 22:14:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:01.654 22:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:01.654 22:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:01.654 22:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.654 
22:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:01.654 22:14:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:03.570 22:14:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:03.570 22:14:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:03.570 22:14:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:03.570 22:14:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:03.570 22:14:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.570 22:14:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:03.570 22:14:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.570 22:14:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:05.484 22:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:05.484 22:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:05.484 22:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.484 22:14:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:05.484 22:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:07.400 22:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:07.400 22:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:07.400 22:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:07.400 22:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:07.400 22:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.400 22:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:07.400 22:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.400 22:14:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:09.315 22:14:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:09.315 22:14:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:09.315 22:14:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.315 22:14:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:09.315 22:14:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:11.236 22:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:11.236 22:14:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:11.236 22:14:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:11.236 22:14:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:11.236 22:14:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:11.236 22:14:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:11.236 22:14:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.236 22:14:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:12.712 22:14:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:12.712 22:14:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:12.712 22:14:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:12.712 22:14:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:12.712 22:14:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:14.628 22:14:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:14.628 22:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:14.628 22:14:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:14.628 22:14:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:14.628 22:14:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:14.628 22:14:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:14.628 22:14:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.628 22:14:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:16.542 22:14:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:16.542 22:14:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:16.542 22:14:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:16.542 22:14:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:16.542 22:14:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:18.457 22:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:18.457 22:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:18.457 22:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:18.457 22:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:18.457 22:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.457 22:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:18.457 22:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.457 22:14:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:20.370 22:14:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:20.370 22:14:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:20.370 22:14:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:20.370 22:14:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:20.370 22:14:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:22.283 22:14:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:22.283 22:14:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:22.283 22:14:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:22.283 22:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:22.283 22:14:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:22.283 22:14:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:22.283 22:14:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.283 22:14:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:24.195 22:14:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:24.195 22:14:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:24.195 22:14:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.195 22:14:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:24.195 22:14:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:26.108 22:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:26.108 22:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:26.108 22:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:26.108 22:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:26.108 22:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.108 22:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:26.108 22:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.108 22:14:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:28.021 22:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:28.021 22:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:28.021 22:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:28.021 22:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:28.021 22:14:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:30.564 22:14:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:30.564 22:14:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:30.564 22:14:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:30.564 22:14:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:30.564 22:14:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:30.564 
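Each `nvme connect` above is followed by the `waitforserial` poll from autotest_common.sh: sleep, list block devices with `lsblk -l -o NAME,SERIAL`, and `grep -c` for the expected serial until the count matches or 16 iterations pass. A self-contained sketch of that loop is below; the `LIST_DEVS` override is an illustrative hook added here so the loop can run without real NVMe namespaces (the real helper calls `lsblk` directly), and the sleep placement is simplified to check first, sleep on failure.

```shell
# Sketch of the waitforserial polling loop from the log above: wait until
# a block device whose SERIAL matches appears, giving up after 16 attempts.
waitforserial() {
    local serial=$1 nvme_device_counter=1 nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        # Count devices whose lsblk line mentions the serial.
        nvme_devices=$(${LIST_DEVS:-lsblk -l -o NAME,SERIAL} | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
```

In the log the loop succeeds on the first check after the initial two-second sleep (`nvme_devices=1`), so each connect adds roughly two seconds to the run.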
22:14:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0
00:26:30.564 22:14:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:26:30.564 [global]
00:26:30.564 thread=1
00:26:30.564 invalidate=1
00:26:30.564 rw=read
00:26:30.564 time_based=1
00:26:30.564 runtime=10
00:26:30.564 ioengine=libaio
00:26:30.564 direct=1
00:26:30.564 bs=262144
00:26:30.564 iodepth=64
00:26:30.564 norandommap=1
00:26:30.564 numjobs=1
00:26:30.564
00:26:30.564 [job0]
00:26:30.564 filename=/dev/nvme0n1
00:26:30.564 [job1]
00:26:30.564 filename=/dev/nvme10n1
00:26:30.564 [job2]
00:26:30.564 filename=/dev/nvme1n1
00:26:30.564 [job3]
00:26:30.564 filename=/dev/nvme2n1
00:26:30.564 [job4]
00:26:30.564 filename=/dev/nvme3n1
00:26:30.564 [job5]
00:26:30.564 filename=/dev/nvme4n1
00:26:30.564 [job6]
00:26:30.564 filename=/dev/nvme5n1
00:26:30.564 [job7]
00:26:30.564 filename=/dev/nvme6n1
00:26:30.564 [job8]
00:26:30.564 filename=/dev/nvme7n1
00:26:30.564 [job9]
00:26:30.564 filename=/dev/nvme8n1
00:26:30.564 [job10]
00:26:30.564 filename=/dev/nvme9n1
00:26:30.564 Could not set queue depth (nvme0n1)
00:26:30.564 Could not set queue depth (nvme10n1)
00:26:30.564 Could not set queue depth (nvme1n1)
00:26:30.564 Could not set queue depth (nvme2n1)
00:26:30.564 Could not set queue depth (nvme3n1)
00:26:30.564 Could not set queue depth (nvme4n1)
00:26:30.564 Could not set queue depth (nvme5n1)
00:26:30.564 Could not set queue depth (nvme6n1)
00:26:30.564 Could not set queue depth (nvme7n1)
00:26:30.564 Could not set queue depth (nvme8n1)
00:26:30.564 Could not set queue depth (nvme9n1)
00:26:30.825 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:30.825 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB,
ioengine=libaio, iodepth=64 00:26:30.825 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.825 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.825 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.825 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.825 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.825 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.825 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.825 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.825 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:30.825 fio-3.35 00:26:30.825 Starting 11 threads 00:26:43.058 00:26:43.058 job0: (groupid=0, jobs=1): err= 0: pid=3599942: Sat Oct 12 22:14:59 2024 00:26:43.058 read: IOPS=460, BW=115MiB/s (121MB/s)(1166MiB/10120msec) 00:26:43.058 slat (usec): min=9, max=208192, avg=2139.69, stdev=8280.06 00:26:43.058 clat (msec): min=16, max=715, avg=136.40, stdev=122.22 00:26:43.058 lat (msec): min=18, max=715, avg=138.54, stdev=124.02 00:26:43.058 clat percentiles (msec): 00:26:43.058 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 39], 00:26:43.058 | 30.00th=[ 58], 40.00th=[ 79], 50.00th=[ 103], 60.00th=[ 127], 00:26:43.058 | 70.00th=[ 150], 80.00th=[ 184], 90.00th=[ 330], 95.00th=[ 397], 00:26:43.058 | 99.00th=[ 584], 99.50th=[ 600], 99.90th=[ 676], 99.95th=[ 718], 00:26:43.058 | 99.99th=[ 718] 00:26:43.058 bw ( KiB/s): min=25600, max=387072, 
per=16.09%, avg=117811.20, stdev=95588.30, samples=20 00:26:43.058 iops : min= 100, max= 1512, avg=460.20, stdev=373.39, samples=20 00:26:43.058 lat (msec) : 20=0.17%, 50=27.16%, 100=22.06%, 250=36.06%, 500=12.07% 00:26:43.058 lat (msec) : 750=2.49% 00:26:43.058 cpu : usr=0.15%, sys=1.73%, ctx=767, majf=0, minf=4097 00:26:43.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:43.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.058 issued rwts: total=4665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.058 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.058 job1: (groupid=0, jobs=1): err= 0: pid=3599957: Sat Oct 12 22:14:59 2024 00:26:43.058 read: IOPS=268, BW=67.2MiB/s (70.4MB/s)(679MiB/10111msec) 00:26:43.058 slat (usec): min=12, max=209376, avg=2899.44, stdev=11370.95 00:26:43.058 clat (msec): min=14, max=1047, avg=234.99, stdev=181.13 00:26:43.058 lat (msec): min=14, max=1047, avg=237.89, stdev=182.78 00:26:43.058 clat percentiles (msec): 00:26:43.058 | 1.00th=[ 22], 5.00th=[ 37], 10.00th=[ 97], 20.00th=[ 133], 00:26:43.058 | 30.00th=[ 144], 40.00th=[ 155], 50.00th=[ 165], 60.00th=[ 174], 00:26:43.058 | 70.00th=[ 226], 80.00th=[ 355], 90.00th=[ 523], 95.00th=[ 584], 00:26:43.058 | 99.00th=[ 961], 99.50th=[ 1003], 99.90th=[ 1045], 99.95th=[ 1045], 00:26:43.058 | 99.99th=[ 1045] 00:26:43.058 bw ( KiB/s): min=20992, max=137216, per=9.28%, avg=67942.40, stdev=36687.71, samples=20 00:26:43.058 iops : min= 82, max= 536, avg=265.40, stdev=143.31, samples=20 00:26:43.058 lat (msec) : 20=0.81%, 50=6.29%, 100=3.13%, 250=61.80%, 500=16.12% 00:26:43.058 lat (msec) : 750=9.75%, 1000=1.47%, 2000=0.63% 00:26:43.058 cpu : usr=0.18%, sys=1.04%, ctx=572, majf=0, minf=4097 00:26:43.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:43.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.058 issued rwts: total=2717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.058 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.058 job2: (groupid=0, jobs=1): err= 0: pid=3599978: Sat Oct 12 22:14:59 2024 00:26:43.058 read: IOPS=439, BW=110MiB/s (115MB/s)(1111MiB/10102msec) 00:26:43.058 slat (usec): min=9, max=368670, avg=2175.22, stdev=9569.24 00:26:43.058 clat (msec): min=11, max=690, avg=143.04, stdev=120.57 00:26:43.058 lat (msec): min=11, max=738, avg=145.22, stdev=121.86 00:26:43.058 clat percentiles (msec): 00:26:43.058 | 1.00th=[ 30], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 46], 00:26:43.058 | 30.00th=[ 53], 40.00th=[ 64], 50.00th=[ 96], 60.00th=[ 165], 00:26:43.058 | 70.00th=[ 188], 80.00th=[ 211], 90.00th=[ 313], 95.00th=[ 388], 00:26:43.058 | 99.00th=[ 550], 99.50th=[ 651], 99.90th=[ 684], 99.95th=[ 693], 00:26:43.058 | 99.99th=[ 693] 00:26:43.058 bw ( KiB/s): min=32768, max=313856, per=15.31%, avg=112128.00, stdev=94846.13, samples=20 00:26:43.058 iops : min= 128, max= 1226, avg=438.00, stdev=370.49, samples=20 00:26:43.058 lat (msec) : 20=0.36%, 50=26.10%, 100=24.21%, 250=35.98%, 500=11.61% 00:26:43.058 lat (msec) : 750=1.73% 00:26:43.058 cpu : usr=0.14%, sys=1.62%, ctx=728, majf=0, minf=4097 00:26:43.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:43.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.058 issued rwts: total=4444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.058 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.058 job3: (groupid=0, jobs=1): err= 0: pid=3599982: Sat Oct 12 22:14:59 2024 00:26:43.058 read: IOPS=245, BW=61.4MiB/s (64.4MB/s)(617MiB/10044msec) 00:26:43.058 slat (usec): min=12, max=102429, avg=3565.42, stdev=11026.74 
00:26:43.058 clat (msec): min=30, max=744, avg=256.67, stdev=118.42 00:26:43.058 lat (msec): min=30, max=744, avg=260.23, stdev=119.50 00:26:43.058 clat percentiles (msec): 00:26:43.058 | 1.00th=[ 63], 5.00th=[ 120], 10.00th=[ 133], 20.00th=[ 146], 00:26:43.058 | 30.00th=[ 169], 40.00th=[ 197], 50.00th=[ 226], 60.00th=[ 288], 00:26:43.058 | 70.00th=[ 330], 80.00th=[ 368], 90.00th=[ 409], 95.00th=[ 435], 00:26:43.058 | 99.00th=[ 659], 99.50th=[ 701], 99.90th=[ 743], 99.95th=[ 743], 00:26:43.058 | 99.99th=[ 743] 00:26:43.058 bw ( KiB/s): min=29696, max=124416, per=8.41%, avg=61568.00, stdev=26048.72, samples=20 00:26:43.058 iops : min= 116, max= 486, avg=240.50, stdev=101.75, samples=20 00:26:43.058 lat (msec) : 50=0.61%, 100=1.70%, 250=52.80%, 500=42.54%, 750=2.35% 00:26:43.058 cpu : usr=0.11%, sys=0.97%, ctx=447, majf=0, minf=4097 00:26:43.058 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.4% 00:26:43.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.058 issued rwts: total=2468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.058 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.058 job4: (groupid=0, jobs=1): err= 0: pid=3599985: Sat Oct 12 22:14:59 2024 00:26:43.058 read: IOPS=203, BW=50.8MiB/s (53.3MB/s)(513MiB/10096msec) 00:26:43.058 slat (usec): min=12, max=120026, avg=3322.33, stdev=12538.53 00:26:43.058 clat (msec): min=12, max=1037, avg=311.19, stdev=182.47 00:26:43.058 lat (msec): min=14, max=1051, avg=314.51, stdev=183.38 00:26:43.058 clat percentiles (msec): 00:26:43.058 | 1.00th=[ 63], 5.00th=[ 116], 10.00th=[ 146], 20.00th=[ 186], 00:26:43.058 | 30.00th=[ 215], 40.00th=[ 255], 50.00th=[ 292], 60.00th=[ 313], 00:26:43.058 | 70.00th=[ 338], 80.00th=[ 363], 90.00th=[ 464], 95.00th=[ 785], 00:26:43.058 | 99.00th=[ 1003], 99.50th=[ 1011], 99.90th=[ 1020], 99.95th=[ 1020], 00:26:43.058 | 99.99th=[ 1036] 
00:26:43.058 bw ( KiB/s): min=24576, max=88576, per=6.95%, avg=50867.20, stdev=14366.07, samples=20 00:26:43.058 iops : min= 96, max= 346, avg=198.70, stdev=56.12, samples=20 00:26:43.058 lat (msec) : 20=0.20%, 50=0.54%, 100=2.58%, 250=35.45%, 500=53.10% 00:26:43.058 lat (msec) : 750=2.00%, 1000=4.97%, 2000=1.17% 00:26:43.058 cpu : usr=0.06%, sys=0.75%, ctx=444, majf=0, minf=3534 00:26:43.058 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:43.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.058 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.058 issued rwts: total=2051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.058 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.058 job5: (groupid=0, jobs=1): err= 0: pid=3600012: Sat Oct 12 22:14:59 2024 00:26:43.058 read: IOPS=210, BW=52.5MiB/s (55.1MB/s)(530MiB/10091msec) 00:26:43.058 slat (usec): min=11, max=238582, avg=3146.56, stdev=14378.46 00:26:43.058 clat (msec): min=3, max=1143, avg=300.96, stdev=280.10 00:26:43.058 lat (msec): min=3, max=1143, avg=304.10, stdev=282.66 00:26:43.058 clat percentiles (msec): 00:26:43.058 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 19], 20.00th=[ 39], 00:26:43.058 | 30.00th=[ 74], 40.00th=[ 167], 50.00th=[ 222], 60.00th=[ 288], 00:26:43.058 | 70.00th=[ 401], 80.00th=[ 567], 90.00th=[ 751], 95.00th=[ 885], 00:26:43.058 | 99.00th=[ 986], 99.50th=[ 1036], 99.90th=[ 1133], 99.95th=[ 1150], 00:26:43.058 | 99.99th=[ 1150] 00:26:43.058 bw ( KiB/s): min=12800, max=287232, per=7.20%, avg=52684.80, stdev=60099.77, samples=20 00:26:43.058 iops : min= 50, max= 1122, avg=205.80, stdev=234.76, samples=20 00:26:43.058 lat (msec) : 4=0.09%, 10=2.31%, 20=9.29%, 50=11.83%, 100=11.79% 00:26:43.058 lat (msec) : 250=20.04%, 500=21.17%, 750=13.72%, 1000=9.05%, 2000=0.71% 00:26:43.058 cpu : usr=0.07%, sys=0.85%, ctx=469, majf=0, minf=4097 00:26:43.058 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 
16=0.8%, 32=1.5%, >=64=97.0% 00:26:43.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.058 issued rwts: total=2121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.058 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.058 job6: (groupid=0, jobs=1): err= 0: pid=3600024: Sat Oct 12 22:14:59 2024 00:26:43.058 read: IOPS=217, BW=54.4MiB/s (57.0MB/s)(550MiB/10115msec) 00:26:43.058 slat (usec): min=12, max=754801, avg=3344.43, stdev=24373.98 00:26:43.058 clat (msec): min=13, max=1145, avg=290.33, stdev=255.19 00:26:43.058 lat (msec): min=15, max=1520, avg=293.68, stdev=258.06 00:26:43.058 clat percentiles (msec): 00:26:43.058 | 1.00th=[ 35], 5.00th=[ 52], 10.00th=[ 68], 20.00th=[ 94], 00:26:43.058 | 30.00th=[ 117], 40.00th=[ 146], 50.00th=[ 182], 60.00th=[ 288], 00:26:43.058 | 70.00th=[ 342], 80.00th=[ 439], 90.00th=[ 667], 95.00th=[ 869], 00:26:43.058 | 99.00th=[ 1116], 99.50th=[ 1133], 99.90th=[ 1133], 99.95th=[ 1133], 00:26:43.058 | 99.99th=[ 1150] 00:26:43.058 bw ( KiB/s): min= 1536, max=115200, per=7.47%, avg=54707.20, stdev=31765.84, samples=20 00:26:43.058 iops : min= 6, max= 450, avg=213.70, stdev=124.09, samples=20 00:26:43.058 lat (msec) : 20=0.18%, 50=4.50%, 100=19.58%, 250=31.85%, 500=26.99% 00:26:43.058 lat (msec) : 750=8.91%, 1000=5.22%, 2000=2.77% 00:26:43.058 cpu : usr=0.03%, sys=0.84%, ctx=458, majf=0, minf=4097 00:26:43.058 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:43.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.058 issued rwts: total=2201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.058 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.058 job7: (groupid=0, jobs=1): err= 0: pid=3600034: Sat Oct 12 22:14:59 2024 00:26:43.058 read: IOPS=334, 
BW=83.5MiB/s (87.6MB/s)(839MiB/10040msec) 00:26:43.058 slat (usec): min=12, max=139727, avg=2648.03, stdev=9187.45 00:26:43.058 clat (msec): min=15, max=649, avg=188.73, stdev=100.29 00:26:43.058 lat (msec): min=15, max=666, avg=191.37, stdev=101.65 00:26:43.058 clat percentiles (msec): 00:26:43.058 | 1.00th=[ 63], 5.00th=[ 75], 10.00th=[ 82], 20.00th=[ 90], 00:26:43.058 | 30.00th=[ 106], 40.00th=[ 140], 50.00th=[ 174], 60.00th=[ 205], 00:26:43.058 | 70.00th=[ 239], 80.00th=[ 279], 90.00th=[ 321], 95.00th=[ 368], 00:26:43.059 | 99.00th=[ 447], 99.50th=[ 481], 99.90th=[ 651], 99.95th=[ 651], 00:26:43.059 | 99.99th=[ 651] 00:26:43.059 bw ( KiB/s): min=33792, max=173056, per=11.51%, avg=84249.60, stdev=43552.00, samples=20 00:26:43.059 iops : min= 132, max= 676, avg=329.10, stdev=170.13, samples=20 00:26:43.059 lat (msec) : 20=0.12%, 50=0.39%, 100=27.58%, 250=44.13%, 500=27.46% 00:26:43.059 lat (msec) : 750=0.33% 00:26:43.059 cpu : usr=0.10%, sys=1.07%, ctx=566, majf=0, minf=4097 00:26:43.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:43.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.059 issued rwts: total=3354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.059 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.059 job8: (groupid=0, jobs=1): err= 0: pid=3600062: Sat Oct 12 22:14:59 2024 00:26:43.059 read: IOPS=140, BW=35.1MiB/s (36.8MB/s)(354MiB/10092msec) 00:26:43.059 slat (usec): min=9, max=699794, avg=5483.87, stdev=30726.77 00:26:43.059 clat (msec): min=20, max=1102, avg=450.56, stdev=252.99 00:26:43.059 lat (msec): min=20, max=1574, avg=456.04, stdev=257.09 00:26:43.059 clat percentiles (msec): 00:26:43.059 | 1.00th=[ 31], 5.00th=[ 138], 10.00th=[ 176], 20.00th=[ 266], 00:26:43.059 | 30.00th=[ 296], 40.00th=[ 338], 50.00th=[ 376], 60.00th=[ 405], 00:26:43.059 | 70.00th=[ 558], 80.00th=[ 
684], 90.00th=[ 827], 95.00th=[ 969], 00:26:43.059 | 99.00th=[ 1070], 99.50th=[ 1070], 99.90th=[ 1099], 99.95th=[ 1099], 00:26:43.059 | 99.99th=[ 1099] 00:26:43.059 bw ( KiB/s): min= 7680, max=68608, per=4.73%, avg=34606.75, stdev=15633.37, samples=20 00:26:43.059 iops : min= 30, max= 268, avg=135.15, stdev=61.05, samples=20 00:26:43.059 lat (msec) : 50=1.98%, 100=1.84%, 250=13.29%, 500=50.04%, 750=15.55% 00:26:43.059 lat (msec) : 1000=12.72%, 2000=4.59% 00:26:43.059 cpu : usr=0.07%, sys=0.58%, ctx=268, majf=0, minf=4097 00:26:43.059 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:26:43.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.059 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.059 issued rwts: total=1415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.059 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.059 job9: (groupid=0, jobs=1): err= 0: pid=3600074: Sat Oct 12 22:14:59 2024 00:26:43.059 read: IOPS=136, BW=34.2MiB/s (35.9MB/s)(346MiB/10095msec) 00:26:43.059 slat (usec): min=12, max=332772, avg=5521.02, stdev=21485.04 00:26:43.059 clat (msec): min=13, max=1329, avg=461.38, stdev=242.53 00:26:43.059 lat (msec): min=15, max=1329, avg=466.90, stdev=244.94 00:26:43.059 clat percentiles (msec): 00:26:43.059 | 1.00th=[ 57], 5.00th=[ 157], 10.00th=[ 184], 20.00th=[ 271], 00:26:43.059 | 30.00th=[ 300], 40.00th=[ 330], 50.00th=[ 372], 60.00th=[ 443], 00:26:43.059 | 70.00th=[ 600], 80.00th=[ 709], 90.00th=[ 835], 95.00th=[ 902], 00:26:43.059 | 99.00th=[ 1003], 99.50th=[ 1045], 99.90th=[ 1045], 99.95th=[ 1334], 00:26:43.059 | 99.99th=[ 1334] 00:26:43.059 bw ( KiB/s): min=13824, max=69120, per=4.61%, avg=33745.00, stdev=16135.00, samples=20 00:26:43.059 iops : min= 54, max= 270, avg=131.80, stdev=63.02, samples=20 00:26:43.059 lat (msec) : 20=0.36%, 50=0.29%, 100=0.65%, 250=14.40%, 500=47.76% 00:26:43.059 lat (msec) : 750=18.60%, 1000=17.00%, 2000=0.94% 
00:26:43.059 cpu : usr=0.03%, sys=0.53%, ctx=274, majf=0, minf=4097 00:26:43.059 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:26:43.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.059 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.059 issued rwts: total=1382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.059 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.059 job10: (groupid=0, jobs=1): err= 0: pid=3600085: Sat Oct 12 22:14:59 2024 00:26:43.059 read: IOPS=210, BW=52.5MiB/s (55.1MB/s)(532MiB/10119msec) 00:26:43.059 slat (usec): min=8, max=795478, avg=4020.23, stdev=26036.94 00:26:43.059 clat (msec): min=15, max=1199, avg=300.11, stdev=261.13 00:26:43.059 lat (msec): min=15, max=1423, avg=304.13, stdev=263.60 00:26:43.059 clat percentiles (msec): 00:26:43.059 | 1.00th=[ 24], 5.00th=[ 42], 10.00th=[ 54], 20.00th=[ 92], 00:26:43.059 | 30.00th=[ 150], 40.00th=[ 176], 50.00th=[ 203], 60.00th=[ 296], 00:26:43.059 | 70.00th=[ 347], 80.00th=[ 439], 90.00th=[ 667], 95.00th=[ 919], 00:26:43.059 | 99.00th=[ 1133], 99.50th=[ 1150], 99.90th=[ 1200], 99.95th=[ 1200], 00:26:43.059 | 99.99th=[ 1200] 00:26:43.059 bw ( KiB/s): min= 5632, max=145408, per=7.21%, avg=52787.20, stdev=36416.24, samples=20 00:26:43.059 iops : min= 22, max= 568, avg=206.20, stdev=142.25, samples=20 00:26:43.059 lat (msec) : 20=0.42%, 50=8.28%, 100=13.03%, 250=34.34%, 500=26.81% 00:26:43.059 lat (msec) : 750=8.56%, 1000=4.89%, 2000=3.67% 00:26:43.059 cpu : usr=0.04%, sys=0.81%, ctx=466, majf=0, minf=4097 00:26:43.059 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:43.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.059 issued rwts: total=2126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.059 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:26:43.059 00:26:43.059 Run status group 0 (all jobs): 00:26:43.059 READ: bw=715MiB/s (750MB/s), 34.2MiB/s-115MiB/s (35.9MB/s-121MB/s), io=7236MiB (7587MB), run=10040-10120msec 00:26:43.059 00:26:43.059 Disk stats (read/write): 00:26:43.059 nvme0n1: ios=9248/0, merge=0/0, ticks=1233371/0, in_queue=1233371, util=96.34% 00:26:43.059 nvme10n1: ios=5364/0, merge=0/0, ticks=1247022/0, in_queue=1247022, util=96.53% 00:26:43.059 nvme1n1: ios=8886/0, merge=0/0, ticks=1259989/0, in_queue=1259989, util=97.00% 00:26:43.059 nvme2n1: ios=4635/0, merge=0/0, ticks=1225758/0, in_queue=1225758, util=97.06% 00:26:43.059 nvme3n1: ios=3883/0, merge=0/0, ticks=1221105/0, in_queue=1221105, util=97.27% 00:26:43.059 nvme4n1: ios=4042/0, merge=0/0, ticks=1222959/0, in_queue=1222959, util=97.72% 00:26:43.059 nvme5n1: ios=4338/0, merge=0/0, ticks=1246914/0, in_queue=1246914, util=97.94% 00:26:43.059 nvme6n1: ios=6366/0, merge=0/0, ticks=1228300/0, in_queue=1228300, util=98.03% 00:26:43.059 nvme7n1: ios=2777/0, merge=0/0, ticks=1256854/0, in_queue=1256854, util=98.61% 00:26:43.059 nvme8n1: ios=2589/0, merge=0/0, ticks=1217049/0, in_queue=1217049, util=98.86% 00:26:43.059 nvme9n1: ios=4156/0, merge=0/0, ticks=1230393/0, in_queue=1230393, util=99.12% 00:26:43.059 22:14:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:43.059 [global] 00:26:43.059 thread=1 00:26:43.059 invalidate=1 00:26:43.059 rw=randwrite 00:26:43.059 time_based=1 00:26:43.059 runtime=10 00:26:43.059 ioengine=libaio 00:26:43.059 direct=1 00:26:43.059 bs=262144 00:26:43.059 iodepth=64 00:26:43.059 norandommap=1 00:26:43.059 numjobs=1 00:26:43.059 00:26:43.059 [job0] 00:26:43.059 filename=/dev/nvme0n1 00:26:43.059 [job1] 00:26:43.059 filename=/dev/nvme10n1 00:26:43.059 [job2] 00:26:43.059 filename=/dev/nvme1n1 00:26:43.059 [job3] 00:26:43.059 
filename=/dev/nvme2n1 00:26:43.059 [job4] 00:26:43.059 filename=/dev/nvme3n1 00:26:43.059 [job5] 00:26:43.059 filename=/dev/nvme4n1 00:26:43.059 [job6] 00:26:43.059 filename=/dev/nvme5n1 00:26:43.059 [job7] 00:26:43.059 filename=/dev/nvme6n1 00:26:43.059 [job8] 00:26:43.059 filename=/dev/nvme7n1 00:26:43.059 [job9] 00:26:43.059 filename=/dev/nvme8n1 00:26:43.059 [job10] 00:26:43.059 filename=/dev/nvme9n1 00:26:43.059 Could not set queue depth (nvme0n1) 00:26:43.059 Could not set queue depth (nvme10n1) 00:26:43.059 Could not set queue depth (nvme1n1) 00:26:43.059 Could not set queue depth (nvme2n1) 00:26:43.059 Could not set queue depth (nvme3n1) 00:26:43.059 Could not set queue depth (nvme4n1) 00:26:43.059 Could not set queue depth (nvme5n1) 00:26:43.059 Could not set queue depth (nvme6n1) 00:26:43.059 Could not set queue depth (nvme7n1) 00:26:43.059 Could not set queue depth (nvme8n1) 00:26:43.059 Could not set queue depth (nvme9n1) 00:26:43.059 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 job8: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.059 fio-3.35 00:26:43.059 Starting 11 threads 00:26:53.060 00:26:53.061 job0: (groupid=0, jobs=1): err= 0: pid=3601542: Sat Oct 12 22:15:10 2024 00:26:53.061 write: IOPS=345, BW=86.5MiB/s (90.7MB/s)(879MiB/10159msec); 0 zone resets 00:26:53.061 slat (usec): min=28, max=74855, avg=2390.68, stdev=6120.99 00:26:53.061 clat (usec): min=1299, max=576257, avg=182553.64, stdev=123834.85 00:26:53.061 lat (usec): min=1358, max=601357, avg=184944.32, stdev=125442.51 00:26:53.061 clat percentiles (msec): 00:26:53.061 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 16], 20.00th=[ 70], 00:26:53.061 | 30.00th=[ 110], 40.00th=[ 148], 50.00th=[ 159], 60.00th=[ 186], 00:26:53.061 | 70.00th=[ 275], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 372], 00:26:53.061 | 99.00th=[ 443], 99.50th=[ 477], 99.90th=[ 575], 99.95th=[ 575], 00:26:53.061 | 99.99th=[ 575] 00:26:53.061 bw ( KiB/s): min=41472, max=302592, per=7.61%, avg=88332.00, stdev=59987.30, samples=20 00:26:53.061 iops : min= 162, max= 1182, avg=345.00, stdev=234.30, samples=20 00:26:53.061 lat (msec) : 2=0.20%, 4=2.13%, 10=5.24%, 20=5.49%, 50=5.15% 00:26:53.061 lat (msec) : 100=8.42%, 250=40.47%, 500=32.44%, 750=0.46% 00:26:53.061 cpu : usr=0.79%, sys=1.25%, ctx=1746, majf=0, minf=1 00:26:53.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:53.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.061 issued rwts: total=0,3514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.061 latency : target=0, window=0, percentile=100.00%, depth=64 
00:26:53.061 job1: (groupid=0, jobs=1): err= 0: pid=3601569: Sat Oct 12 22:15:10 2024 00:26:53.061 write: IOPS=382, BW=95.6MiB/s (100MB/s)(965MiB/10090msec); 0 zone resets 00:26:53.061 slat (usec): min=23, max=117568, avg=2507.44, stdev=6174.85 00:26:53.061 clat (msec): min=2, max=437, avg=164.77, stdev=101.31 00:26:53.061 lat (msec): min=2, max=437, avg=167.28, stdev=102.76 00:26:53.061 clat percentiles (msec): 00:26:53.061 | 1.00th=[ 11], 5.00th=[ 58], 10.00th=[ 100], 20.00th=[ 107], 00:26:53.061 | 30.00th=[ 113], 40.00th=[ 120], 50.00th=[ 124], 60.00th=[ 128], 00:26:53.061 | 70.00th=[ 134], 80.00th=[ 253], 90.00th=[ 359], 95.00th=[ 380], 00:26:53.061 | 99.00th=[ 430], 99.50th=[ 435], 99.90th=[ 439], 99.95th=[ 439], 00:26:53.061 | 99.99th=[ 439] 00:26:53.061 bw ( KiB/s): min=36864, max=203776, per=8.38%, avg=97183.40, stdev=51056.02, samples=20 00:26:53.061 iops : min= 144, max= 796, avg=379.60, stdev=199.46, samples=20 00:26:53.061 lat (msec) : 4=0.16%, 10=0.75%, 20=1.50%, 50=2.18%, 100=6.37% 00:26:53.061 lat (msec) : 250=68.96%, 500=20.08% 00:26:53.061 cpu : usr=1.07%, sys=1.17%, ctx=1158, majf=0, minf=1 00:26:53.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:53.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.061 issued rwts: total=0,3859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.061 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:53.061 job2: (groupid=0, jobs=1): err= 0: pid=3601581: Sat Oct 12 22:15:10 2024 00:26:53.061 write: IOPS=231, BW=58.0MiB/s (60.8MB/s)(589MiB/10157msec); 0 zone resets 00:26:53.061 slat (usec): min=29, max=132440, avg=4010.34, stdev=8635.15 00:26:53.061 clat (msec): min=27, max=568, avg=271.82, stdev=102.45 00:26:53.061 lat (msec): min=27, max=568, avg=275.83, stdev=103.88 00:26:53.061 clat percentiles (msec): 00:26:53.061 | 1.00th=[ 32], 5.00th=[ 86], 
10.00th=[ 150], 20.00th=[ 161], 00:26:53.061 | 30.00th=[ 184], 40.00th=[ 257], 50.00th=[ 309], 60.00th=[ 330], 00:26:53.061 | 70.00th=[ 342], 80.00th=[ 359], 90.00th=[ 380], 95.00th=[ 405], 00:26:53.061 | 99.00th=[ 443], 99.50th=[ 493], 99.90th=[ 542], 99.95th=[ 567], 00:26:53.061 | 99.99th=[ 567] 00:26:53.061 bw ( KiB/s): min=36937, max=115200, per=5.06%, avg=58653.25, stdev=22812.47, samples=20 00:26:53.061 iops : min= 144, max= 450, avg=229.10, stdev=89.13, samples=20 00:26:53.061 lat (msec) : 50=2.63%, 100=3.06%, 250=32.70%, 500=61.19%, 750=0.42% 00:26:53.061 cpu : usr=0.59%, sys=0.77%, ctx=763, majf=0, minf=1 00:26:53.061 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:53.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.061 issued rwts: total=0,2355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.061 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:53.061 job3: (groupid=0, jobs=1): err= 0: pid=3601586: Sat Oct 12 22:15:10 2024 00:26:53.061 write: IOPS=411, BW=103MiB/s (108MB/s)(1045MiB/10158msec); 0 zone resets 00:26:53.061 slat (usec): min=17, max=113184, avg=2365.70, stdev=6571.55 00:26:53.061 clat (msec): min=23, max=560, avg=153.14, stdev=140.68 00:26:53.061 lat (msec): min=23, max=560, avg=155.51, stdev=142.70 00:26:53.061 clat percentiles (msec): 00:26:53.061 | 1.00th=[ 38], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 44], 00:26:53.061 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 127], 00:26:53.061 | 70.00th=[ 241], 80.00th=[ 342], 90.00th=[ 372], 95.00th=[ 401], 00:26:53.061 | 99.00th=[ 435], 99.50th=[ 472], 99.90th=[ 535], 99.95th=[ 535], 00:26:53.061 | 99.99th=[ 558] 00:26:53.061 bw ( KiB/s): min=38912, max=351744, per=9.08%, avg=105348.50, stdev=114173.79, samples=20 00:26:53.061 iops : min= 152, max= 1374, avg=411.50, stdev=446.00, samples=20 00:26:53.061 lat (msec) : 50=49.17%, 
100=7.82%, 250=13.71%, 500=29.05%, 750=0.24% 00:26:53.061 cpu : usr=0.73%, sys=1.14%, ctx=1074, majf=0, minf=1 00:26:53.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:53.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.061 issued rwts: total=0,4179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.061 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:53.061 job4: (groupid=0, jobs=1): err= 0: pid=3601587: Sat Oct 12 22:15:10 2024 00:26:53.061 write: IOPS=297, BW=74.4MiB/s (78.0MB/s)(756MiB/10158msec); 0 zone resets 00:26:53.061 slat (usec): min=22, max=46069, avg=3193.21, stdev=6526.43 00:26:53.061 clat (msec): min=10, max=557, avg=211.77, stdev=100.59 00:26:53.061 lat (msec): min=10, max=557, avg=214.96, stdev=101.96 00:26:53.061 clat percentiles (msec): 00:26:53.061 | 1.00th=[ 41], 5.00th=[ 65], 10.00th=[ 112], 20.00th=[ 148], 00:26:53.061 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 169], 00:26:53.061 | 70.00th=[ 279], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 384], 00:26:53.061 | 99.00th=[ 409], 99.50th=[ 456], 99.90th=[ 531], 99.95th=[ 558], 00:26:53.061 | 99.99th=[ 558] 00:26:53.061 bw ( KiB/s): min=38912, max=174428, per=6.53%, avg=75767.80, stdev=36109.70, samples=20 00:26:53.061 iops : min= 152, max= 681, avg=295.95, stdev=141.00, samples=20 00:26:53.061 lat (msec) : 20=0.40%, 50=0.96%, 100=7.08%, 250=59.15%, 500=32.09% 00:26:53.061 lat (msec) : 750=0.33% 00:26:53.061 cpu : usr=0.67%, sys=0.83%, ctx=828, majf=0, minf=1 00:26:53.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:53.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.061 issued rwts: total=0,3023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.061 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:26:53.061 job5: (groupid=0, jobs=1): err= 0: pid=3601588: Sat Oct 12 22:15:10 2024 00:26:53.061 write: IOPS=626, BW=157MiB/s (164MB/s)(1576MiB/10068msec); 0 zone resets 00:26:53.061 slat (usec): min=24, max=25739, avg=1545.16, stdev=2713.67 00:26:53.061 clat (msec): min=27, max=230, avg=100.63, stdev=18.40 00:26:53.061 lat (msec): min=27, max=234, avg=102.18, stdev=18.38 00:26:53.061 clat percentiles (msec): 00:26:53.061 | 1.00th=[ 70], 5.00th=[ 80], 10.00th=[ 83], 20.00th=[ 87], 00:26:53.061 | 30.00th=[ 95], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 105], 00:26:53.061 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 110], 95.00th=[ 111], 00:26:53.061 | 99.00th=[ 194], 99.50th=[ 220], 99.90th=[ 228], 99.95th=[ 230], 00:26:53.061 | 99.99th=[ 232] 00:26:53.061 bw ( KiB/s): min=93883, max=193536, per=13.77%, avg=159778.95, stdev=21616.25, samples=20 00:26:53.061 iops : min= 366, max= 756, avg=624.10, stdev=84.56, samples=20 00:26:53.061 lat (msec) : 50=0.19%, 100=43.81%, 250=56.00% 00:26:53.061 cpu : usr=1.44%, sys=1.58%, ctx=1614, majf=0, minf=1 00:26:53.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:53.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.061 issued rwts: total=0,6304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.061 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:53.061 job6: (groupid=0, jobs=1): err= 0: pid=3601589: Sat Oct 12 22:15:10 2024 00:26:53.061 write: IOPS=632, BW=158MiB/s (166MB/s)(1592MiB/10066msec); 0 zone resets 00:26:53.061 slat (usec): min=27, max=54106, avg=1566.21, stdev=2806.69 00:26:53.061 clat (msec): min=43, max=249, avg=99.60, stdev=19.16 00:26:53.061 lat (msec): min=47, max=249, avg=101.17, stdev=19.24 00:26:53.061 clat percentiles (msec): 00:26:53.061 | 1.00th=[ 61], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 85], 00:26:53.061 
| 30.00th=[ 93], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 105], 00:26:53.061 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 110], 95.00th=[ 111], 00:26:53.061 | 99.00th=[ 194], 99.50th=[ 211], 99.90th=[ 236], 99.95th=[ 236], 00:26:53.061 | 99.99th=[ 249] 00:26:53.061 bw ( KiB/s): min=88064, max=207872, per=13.91%, avg=161356.80, stdev=25153.78, samples=20 00:26:53.061 iops : min= 344, max= 812, avg=630.30, stdev=98.26, samples=20 00:26:53.061 lat (msec) : 50=0.42%, 100=46.65%, 250=52.92% 00:26:53.061 cpu : usr=1.39%, sys=2.13%, ctx=1581, majf=0, minf=1 00:26:53.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:53.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.061 issued rwts: total=0,6366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.061 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:53.061 job7: (groupid=0, jobs=1): err= 0: pid=3601596: Sat Oct 12 22:15:10 2024 00:26:53.061 write: IOPS=295, BW=74.0MiB/s (77.6MB/s)(750MiB/10133msec); 0 zone resets 00:26:53.061 slat (usec): min=25, max=30926, avg=3201.79, stdev=6405.18 00:26:53.061 clat (msec): min=9, max=514, avg=212.96, stdev=100.17 00:26:53.061 lat (msec): min=9, max=514, avg=216.16, stdev=101.39 00:26:53.061 clat percentiles (msec): 00:26:53.061 | 1.00th=[ 41], 5.00th=[ 66], 10.00th=[ 112], 20.00th=[ 148], 00:26:53.061 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 182], 00:26:53.061 | 70.00th=[ 296], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 388], 00:26:53.061 | 99.00th=[ 418], 99.50th=[ 435], 99.90th=[ 514], 99.95th=[ 514], 00:26:53.061 | 99.99th=[ 514] 00:26:53.061 bw ( KiB/s): min=40448, max=171520, per=6.48%, avg=75161.60, stdev=35373.70, samples=20 00:26:53.061 iops : min= 158, max= 670, avg=293.60, stdev=138.18, samples=20 00:26:53.061 lat (msec) : 10=0.13%, 20=0.27%, 50=0.93%, 100=7.27%, 250=57.15% 00:26:53.061 lat (msec) : 
500=34.14%, 750=0.10% 00:26:53.062 cpu : usr=0.67%, sys=0.79%, ctx=789, majf=0, minf=1 00:26:53.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:53.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.062 issued rwts: total=0,2999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.062 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:53.062 job8: (groupid=0, jobs=1): err= 0: pid=3601599: Sat Oct 12 22:15:10 2024 00:26:53.062 write: IOPS=469, BW=117MiB/s (123MB/s)(1184MiB/10092msec); 0 zone resets 00:26:53.062 slat (usec): min=14, max=46884, avg=1916.87, stdev=4725.55 00:26:53.062 clat (msec): min=2, max=405, avg=134.41, stdev=97.03 00:26:53.062 lat (msec): min=2, max=405, avg=136.33, stdev=98.42 00:26:53.062 clat percentiles (msec): 00:26:53.062 | 1.00th=[ 11], 5.00th=[ 19], 10.00th=[ 31], 20.00th=[ 61], 00:26:53.062 | 30.00th=[ 71], 40.00th=[ 105], 50.00th=[ 116], 60.00th=[ 123], 00:26:53.062 | 70.00th=[ 129], 80.00th=[ 226], 90.00th=[ 313], 95.00th=[ 334], 00:26:53.062 | 99.00th=[ 380], 99.50th=[ 397], 99.90th=[ 405], 99.95th=[ 405], 00:26:53.062 | 99.99th=[ 405] 00:26:53.062 bw ( KiB/s): min=43008, max=302592, per=10.31%, avg=119603.20, stdev=74065.64, samples=20 00:26:53.062 iops : min= 168, max= 1182, avg=467.20, stdev=289.32, samples=20 00:26:53.062 lat (msec) : 4=0.02%, 10=0.57%, 20=5.95%, 50=8.85%, 100=20.92% 00:26:53.062 lat (msec) : 250=46.05%, 500=17.63% 00:26:53.062 cpu : usr=1.04%, sys=1.40%, ctx=1863, majf=0, minf=2 00:26:53.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:53.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.062 issued rwts: total=0,4736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.062 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:26:53.062 job9: (groupid=0, jobs=1): err= 0: pid=3601603: Sat Oct 12 22:15:10 2024 00:26:53.062 write: IOPS=201, BW=50.4MiB/s (52.8MB/s)(512MiB/10158msec); 0 zone resets 00:26:53.062 slat (usec): min=22, max=83315, avg=4377.74, stdev=8938.95 00:26:53.062 clat (msec): min=51, max=560, avg=313.03, stdev=70.05 00:26:53.062 lat (msec): min=51, max=560, avg=317.41, stdev=70.84 00:26:53.062 clat percentiles (msec): 00:26:53.062 | 1.00th=[ 84], 5.00th=[ 155], 10.00th=[ 230], 20.00th=[ 279], 00:26:53.062 | 30.00th=[ 305], 40.00th=[ 317], 50.00th=[ 330], 60.00th=[ 334], 00:26:53.062 | 70.00th=[ 342], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 401], 00:26:53.062 | 99.00th=[ 435], 99.50th=[ 485], 99.90th=[ 535], 99.95th=[ 558], 00:26:53.062 | 99.99th=[ 558] 00:26:53.062 bw ( KiB/s): min=38912, max=79872, per=4.38%, avg=50769.05, stdev=8941.86, samples=20 00:26:53.062 iops : min= 152, max= 312, avg=198.30, stdev=34.95, samples=20 00:26:53.062 lat (msec) : 100=2.10%, 250=13.14%, 500=84.27%, 750=0.49% 00:26:53.062 cpu : usr=0.47%, sys=0.62%, ctx=707, majf=0, minf=1 00:26:53.062 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:53.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.062 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.062 issued rwts: total=0,2047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.062 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:53.062 job10: (groupid=0, jobs=1): err= 0: pid=3601604: Sat Oct 12 22:15:10 2024 00:26:53.062 write: IOPS=661, BW=165MiB/s (173MB/s)(1663MiB/10051msec); 0 zone resets 00:26:53.062 slat (usec): min=23, max=101695, avg=1465.58, stdev=2910.71 00:26:53.062 clat (msec): min=4, max=314, avg=95.21, stdev=25.89 00:26:53.062 lat (msec): min=4, max=314, avg=96.68, stdev=26.10 00:26:53.062 clat percentiles (msec): 00:26:53.062 | 1.00th=[ 26], 5.00th=[ 54], 10.00th=[ 57], 20.00th=[ 83], 00:26:53.062 
| 30.00th=[ 91], 40.00th=[ 97], 50.00th=[ 101], 60.00th=[ 104], 00:26:53.062 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 110], 95.00th=[ 111], 00:26:53.062 | 99.00th=[ 194], 99.50th=[ 218], 99.90th=[ 288], 99.95th=[ 300], 00:26:53.062 | 99.99th=[ 313] 00:26:53.062 bw ( KiB/s): min=79519, max=284672, per=14.54%, avg=168635.15, stdev=45020.52, samples=20 00:26:53.062 iops : min= 310, max= 1112, avg=658.70, stdev=175.93, samples=20 00:26:53.062 lat (msec) : 10=0.38%, 20=0.44%, 50=1.04%, 100=47.44%, 250=50.47% 00:26:53.062 lat (msec) : 500=0.24% 00:26:53.062 cpu : usr=1.24%, sys=1.96%, ctx=1775, majf=0, minf=1 00:26:53.062 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:53.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:53.062 issued rwts: total=0,6650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.062 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:53.062 00:26:53.062 Run status group 0 (all jobs): 00:26:53.062 WRITE: bw=1133MiB/s (1188MB/s), 50.4MiB/s-165MiB/s (52.8MB/s-173MB/s), io=11.2GiB (12.1GB), run=10051-10159msec 00:26:53.062 00:26:53.062 Disk stats (read/write): 00:26:53.062 nvme0n1: ios=49/6942, merge=0/0, ticks=284/1219955, in_queue=1220239, util=97.16% 00:26:53.062 nvme10n1: ios=49/7708, merge=0/0, ticks=455/1229500, in_queue=1229955, util=97.83% 00:26:53.062 nvme1n1: ios=42/4628, merge=0/0, ticks=2205/1216357, in_queue=1218562, util=100.00% 00:26:53.062 nvme2n1: ios=0/8274, merge=0/0, ticks=0/1214191, in_queue=1214191, util=97.16% 00:26:53.062 nvme3n1: ios=0/5961, merge=0/0, ticks=0/1216559, in_queue=1216559, util=97.26% 00:26:53.062 nvme4n1: ios=0/12240, merge=0/0, ticks=0/1199476, in_queue=1199476, util=97.65% 00:26:53.062 nvme5n1: ios=13/12346, merge=0/0, ticks=349/1197189, in_queue=1197538, util=98.13% 00:26:53.062 nvme6n1: ios=13/5950, merge=0/0, ticks=362/1219911, in_queue=1220273, 
util=98.22% 00:26:53.062 nvme7n1: ios=42/9457, merge=0/0, ticks=202/1232876, in_queue=1233078, util=99.86% 00:26:53.062 nvme8n1: ios=38/4010, merge=0/0, ticks=1197/1219337, in_queue=1220534, util=99.85% 00:26:53.062 nvme9n1: ios=38/12822, merge=0/0, ticks=1706/1199728, in_queue=1201434, util=99.84% 00:26:53.062 22:15:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:53.062 22:15:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:53.062 22:15:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.062 22:15:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:53.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.062 22:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.062 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:53.323 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.323 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:53.583 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:53.583 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:53.583 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:53.583 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:53.583 22:15:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:53.583 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.583 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:53.583 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.583 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:53.583 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.583 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.583 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.583 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 
-- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.583 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:53.843 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.843 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:54.414 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:54.414 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:54.414 22:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:54.414 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:54.674 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:54.674 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:54.674 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.674 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.674 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.674 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.674 22:15:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:54.674 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 
00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.674 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:54.935 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.935 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:55.196 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:55.196 22:15:13 
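The `waitforserial_disconnect` calls traced repeatedly above follow the same pattern each time: check `lsblk -o NAME,SERIAL` for the given serial (SPDK1..SPDK11) and return once it no longer appears. A minimal sketch of what that helper likely does is below; the retry limit and 1-second sleep are assumptions, not values visible in this trace:

```shell
# Hedged sketch of waitforserial_disconnect as traced in the log above.
# The polling limit (15 tries) and 1s sleep interval are assumed, not
# taken from the trace.
waitforserial_disconnect() {
    local serial=$1 i=0
    # Loop while any block device still reports this serial.
    while lsblk -l -o NAME,SERIAL 2>/dev/null | grep -q -w "$serial"; do
        i=$((i + 1))
        if [ "$i" -gt 15 ]; then
            return 1   # give up after ~15s
        fi
        sleep 1
    done
    return 0
}
```

In the trace the serial disappears immediately after `nvme disconnect`, so each call returns 0 on its first check.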
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:55.196 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.196 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:55.457 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.457 22:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.457 rmmod nvme_tcp 00:26:55.457 rmmod nvme_fabrics 00:26:55.457 rmmod nvme_keyring 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 3591323 ']' 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 3591323 00:26:55.457 22:15:13 
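The disconnect/delete sequence for cnode1 through cnode11 above is the body of the `for i in $(seq 1 $NVMF_SUBSYS)` loop in multiconnection.sh (script lines 37-40, per the trace). Reconstructed below with `NVMF_SUBSYS=11` inferred from the log; the `nvme` and `rpc_cmd` invocations are echoed rather than executed, since nvme-cli and the SPDK RPC harness are not available outside the CI environment:

```shell
# Teardown loop reconstructed from the multiconnection.sh trace above.
# NVMF_SUBSYS=11 is inferred from the log (cnode1..cnode11); commands are
# echoed, not run, because the SPDK test harness is absent here.
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nqn="nqn.2016-06.io.spdk:cnode${i}"
    echo "nvme disconnect -n ${nqn}"            # sh@38: drop the initiator side
    echo "waitforserial_disconnect SPDK${i}"    # sh@39: wait for lsblk to agree
    echo "rpc_cmd nvmf_delete_subsystem ${nqn}" # sh@40: remove the target subsystem
done
```

After the loop the script removes its fio state file, clears the trap, and calls `nvmftestfini`, matching the `rm -f ./local-job0-0-verify.state` and `rmmod nvme_tcp` records above.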
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 3591323 ']' 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 3591323 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3591323 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3591323' 00:26:55.457 killing process with pid 3591323 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 3591323 00:26:55.457 22:15:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 3591323 00:26:55.718 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:26:55.718 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:26:55.718 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:26:55.718 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:55.719 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:26:55.719 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:26:55.719 
22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:26:55.719 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.719 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.719 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.719 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.719 22:15:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:58.264 00:26:58.264 real 1m18.204s 00:26:58.264 user 4m59.022s 00:26:58.264 sys 0m15.913s 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:58.264 ************************************ 00:26:58.264 END TEST nvmf_multiconnection 00:26:58.264 ************************************ 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:58.264 ************************************ 00:26:58.264 START TEST nvmf_initiator_timeout 
00:26:58.264 ************************************ 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:58.264 * Looking for test storage... 00:26:58.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.264 22:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.264 22:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:58.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.264 --rc genhtml_branch_coverage=1 00:26:58.264 --rc genhtml_function_coverage=1 00:26:58.264 --rc genhtml_legend=1 00:26:58.264 --rc geninfo_all_blocks=1 00:26:58.264 --rc geninfo_unexecuted_blocks=1 00:26:58.264 00:26:58.264 ' 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:58.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.264 --rc genhtml_branch_coverage=1 00:26:58.264 --rc genhtml_function_coverage=1 00:26:58.264 --rc genhtml_legend=1 00:26:58.264 --rc geninfo_all_blocks=1 00:26:58.264 --rc geninfo_unexecuted_blocks=1 00:26:58.264 00:26:58.264 ' 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:58.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.264 --rc genhtml_branch_coverage=1 00:26:58.264 --rc genhtml_function_coverage=1 00:26:58.264 --rc genhtml_legend=1 00:26:58.264 --rc geninfo_all_blocks=1 00:26:58.264 --rc geninfo_unexecuted_blocks=1 00:26:58.264 00:26:58.264 ' 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:58.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.264 --rc genhtml_branch_coverage=1 00:26:58.264 --rc genhtml_function_coverage=1 
00:26:58.264 --rc genhtml_legend=1 00:26:58.264 --rc geninfo_all_blocks=1 00:26:58.264 --rc geninfo_unexecuted_blocks=1 00:26:58.264 00:26:58.264 ' 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:58.264 22:15:16 
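The `cmp_versions` trace earlier in this section (scripts/common.sh@333-368, invoked as `lt 1.15 2` to gate the lcov options) splits each version string on `.`, `-` and `:` with `IFS=.-:` and `read -ra`, then compares the fields numerically one by one. A hedged sketch of that comparison, with missing trailing fields defaulting to 0 (the helper name `ver_lt` is illustrative, not the script's):

```shell
# Sketch of the field-by-field version compare traced from scripts/common.sh
# (cmp_versions with op '<'). ver_lt is a hypothetical name for illustration.
ver_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"   # e.g. "1.15" -> (1 15)
    read -ra v2 <<< "$2"   # e.g. "2"    -> (2)
    local n=${#v1[@]}
    if [ "${#v2[@]}" -gt "$n" ]; then n=${#v2[@]}; fi
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0}; b=${v2[i]:-0}   # absent fields compare as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not less-than
}

if ver_lt 1.15 2; then echo "lcov 1.15 is older than 2"; fi
```

This matches the traced outcome: `lt 1.15 2` succeeds, so the script selects the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option set dumped above.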
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.264 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.265 22:15:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.406 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.406 22:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:06.407 
22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:06.407 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:06.407 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:27:06.407 22:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:06.407 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ up == up ]] 00:27:06.407 22:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:06.407 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # is_hw=yes 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- 
# NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.407 22:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:06.407 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:06.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:27:06.408 00:27:06.408 --- 10.0.0.2 ping statistics --- 00:27:06.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.408 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:27:06.408 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:27:06.408 00:27:06.408 --- 10.0.0.1 ping statistics --- 00:27:06.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.408 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:27:06.408 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.408 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # return 0 00:27:06.408 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:27:06.408 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.408 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:06.408 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:06.408 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.408 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:06.408 22:15:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=3608291 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 3608291 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 3608291 ']' 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.408 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.408 [2024-10-12 22:15:24.083744] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:06.408 [2024-10-12 22:15:24.083809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.408 [2024-10-12 22:15:24.172564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.408 [2024-10-12 22:15:24.221483] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.408 [2024-10-12 22:15:24.221536] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.408 [2024-10-12 22:15:24.221545] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.408 [2024-10-12 22:15:24.221557] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.408 [2024-10-12 22:15:24.221563] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:06.408 [2024-10-12 22:15:24.221714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.408 [2024-10-12 22:15:24.221871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.408 [2024-10-12 22:15:24.221991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.408 [2024-10-12 22:15:24.221993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.669 Malloc0 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.669 22:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.669 Delay0 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.669 22:15:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.669 [2024-10-12 22:15:24.995793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.669 [2024-10-12 22:15:25.036214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.669 22:15:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:08.583 22:15:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:08.583 22:15:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:08.583 22:15:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:08.583 22:15:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:08.583 22:15:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:10.508 22:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:10.508 22:15:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:10.508 22:15:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:10.508 22:15:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:10.508 22:15:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:10.508 22:15:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:10.508 22:15:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3609117 00:27:10.508 22:15:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:10.508 22:15:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:10.508 [global] 00:27:10.508 thread=1 00:27:10.508 invalidate=1 00:27:10.508 rw=write 00:27:10.508 time_based=1 00:27:10.508 runtime=60 00:27:10.508 ioengine=libaio 00:27:10.508 direct=1 00:27:10.508 bs=4096 00:27:10.508 iodepth=1 00:27:10.508 norandommap=0 00:27:10.508 numjobs=1 00:27:10.508 00:27:10.508 verify_dump=1 00:27:10.508 verify_backlog=512 00:27:10.508 verify_state_save=0 00:27:10.508 do_verify=1 00:27:10.508 verify=crc32c-intel 00:27:10.508 [job0] 00:27:10.508 filename=/dev/nvme0n1 00:27:10.508 Could not set queue depth (nvme0n1) 00:27:10.769 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:10.769 fio-3.35 00:27:10.769 Starting 1 thread 00:27:13.313 22:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.313 true 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.313 true 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.313 true 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.313 22:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.313 true 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.313 22:15:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.614 true 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.614 true 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.614 true 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.614 true 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:16.614 22:15:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3609117 00:28:12.974 00:28:12.974 job0: (groupid=0, jobs=1): err= 0: pid=3609299: Sat Oct 12 22:16:29 2024 00:28:12.974 read: IOPS=7, BW=28.8KiB/s (29.5kB/s)(1728KiB/60007msec) 00:28:12.974 slat (nsec): min=21880, max=63013, avg=26635.69, stdev=1917.28 00:28:12.974 clat (usec): min=711, max=41927k, avg=138066.87, stdev=2015237.90 00:28:12.974 lat (usec): min=740, max=41927k, avg=138093.50, stdev=2015237.89 00:28:12.974 clat percentiles (usec): 00:28:12.974 | 1.00th=[ 1057], 5.00th=[ 41157], 10.00th=[ 41681], 00:28:12.974 | 20.00th=[ 41681], 30.00th=[ 42206], 40.00th=[ 42206], 00:28:12.974 | 50.00th=[ 42206], 60.00th=[ 42206], 70.00th=[ 42206], 00:28:12.974 | 80.00th=[ 42206], 90.00th=[ 42730], 95.00th=[ 42730], 00:28:12.974 | 99.00th=[ 43254], 99.50th=[ 43254], 99.90th=[17112761], 00:28:12.974 | 99.95th=[17112761], 99.99th=[17112761] 00:28:12.974 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60007msec); 0 zone resets 00:28:12.974 slat (usec): min=9, max=24922, avg=77.79, stdev=1100.20 00:28:12.974 clat (usec): min=225, max=887, avg=587.38, stdev=109.38 00:28:12.974 lat (usec): min=237, max=25612, avg=665.16, stdev=1110.64 00:28:12.974 clat percentiles (usec): 00:28:12.974 | 1.00th=[ 322], 
5.00th=[ 388], 10.00th=[ 424], 20.00th=[ 502], 00:28:12.974 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 619], 00:28:12.974 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 709], 95.00th=[ 742], 00:28:12.974 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 889], 99.95th=[ 889], 00:28:12.974 | 99.99th=[ 889] 00:28:12.974 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:28:12.974 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:28:12.974 lat (usec) : 250=0.21%, 500=10.49%, 750=41.74%, 1000=2.22% 00:28:12.974 lat (msec) : 2=0.64%, 50=44.60%, >=2000=0.11% 00:28:12.974 cpu : usr=0.04%, sys=0.03%, ctx=950, majf=0, minf=1 00:28:12.974 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:12.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.974 issued rwts: total=432,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.974 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:12.974 00:28:12.974 Run status group 0 (all jobs): 00:28:12.974 READ: bw=28.8KiB/s (29.5kB/s), 28.8KiB/s-28.8KiB/s (29.5kB/s-29.5kB/s), io=1728KiB (1769kB), run=60007-60007msec 00:28:12.974 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60007-60007msec 00:28:12.974 00:28:12.974 Disk stats (read/write): 00:28:12.974 nvme0n1: ios=481/512, merge=0/0, ticks=18627/285, in_queue=18912, util=99.94% 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:12.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@1219 -- # local i=0 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:12.974 nvmf hotplug test: fio successful as expected 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # 
nvmftestfini 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:12.974 rmmod nvme_tcp 00:28:12.974 rmmod nvme_fabrics 00:28:12.974 rmmod nvme_keyring 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 3608291 ']' 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 3608291 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 3608291 ']' 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 3608291 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3608291 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3608291' 00:28:12.974 killing process with pid 3608291 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 3608291 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 3608291 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.974 22:16:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.235 22:16:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:13.235 00:28:13.235 real 1m15.376s 00:28:13.235 user 4m38.698s 00:28:13.236 sys 0m7.540s 00:28:13.236 22:16:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:13.236 22:16:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.236 ************************************ 00:28:13.236 END TEST nvmf_initiator_timeout 00:28:13.236 ************************************ 00:28:13.497 22:16:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:13.497 22:16:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:13.497 22:16:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:13.497 22:16:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.497 22:16:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:21.651 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:21.651 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.651 22:16:38 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.651 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:21.652 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up 
== up ]] 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:21.652 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:21.652 ************************************ 00:28:21.652 START TEST nvmf_perf_adq 00:28:21.652 ************************************ 00:28:21.652 22:16:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:21.652 * Looking for test storage... 
00:28:21.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:21.652 22:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:21.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.652 --rc 
genhtml_branch_coverage=1 00:28:21.652 --rc genhtml_function_coverage=1 00:28:21.652 --rc genhtml_legend=1 00:28:21.652 --rc geninfo_all_blocks=1 00:28:21.652 --rc geninfo_unexecuted_blocks=1 00:28:21.652 00:28:21.652 ' 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:21.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.652 --rc genhtml_branch_coverage=1 00:28:21.652 --rc genhtml_function_coverage=1 00:28:21.652 --rc genhtml_legend=1 00:28:21.652 --rc geninfo_all_blocks=1 00:28:21.652 --rc geninfo_unexecuted_blocks=1 00:28:21.652 00:28:21.652 ' 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:21.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.652 --rc genhtml_branch_coverage=1 00:28:21.652 --rc genhtml_function_coverage=1 00:28:21.652 --rc genhtml_legend=1 00:28:21.652 --rc geninfo_all_blocks=1 00:28:21.652 --rc geninfo_unexecuted_blocks=1 00:28:21.652 00:28:21.652 ' 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:21.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.652 --rc genhtml_branch_coverage=1 00:28:21.652 --rc genhtml_function_coverage=1 00:28:21.652 --rc genhtml_legend=1 00:28:21.652 --rc geninfo_all_blocks=1 00:28:21.652 --rc geninfo_unexecuted_blocks=1 00:28:21.652 00:28:21.652 ' 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.652 22:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:21.652 22:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.652 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.653 22:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:21.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:21.653 22:16:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:28.244 22:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:28.244 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:28.244 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == 
unknown ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:28.244 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:28.245 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.245 22:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:28.245 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:28.245 22:16:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:29.645 22:16:47 
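The device scan above (nvmf/common.sh@406-425) resolves each matched PCI function to its network interface by globbing `/sys/bus/pci/devices/$pci/net/`* and stripping the path prefix. The real paths require the ICE hardware on this rig, so the sketch below reproduces the same two glob-and-strip steps against a mocked sysfs tree (the BDFs and cvl_* names are taken from the log, not discovered):

```shell
# Mimic nvmf/common.sh's glob: each PCI function exposes its netdev name
# as a directory under /sys/bus/pci/devices/<bdf>/net/. The tree here is
# a temporary mock -- real BDFs and netdev names depend on the host.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:4b:00.0/net/cvl_0_0" "$sysfs/0000:4b:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)          # glob, as in common.sh@407
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path, as in common.sh@423
    net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[*]}"
rm -rf "$sysfs"
```

This yields the same `cvl_0_0 cvl_0_1` list that the log reports under "Found net devices".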
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:31.568 22:16:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
local -a pci_devs 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:36.854 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:36.855 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:36.855 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.855 22:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:36.855 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:36.855 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 
00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.855 22:16:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:28:36.855 00:28:36.855 --- 10.0.0.2 ping statistics --- 00:28:36.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.855 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:28:36.855 00:28:36.855 --- 10.0.0.1 ping statistics --- 00:28:36.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.855 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=3630440 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 3630440 00:28:36.855 
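The nvmf_tcp_init sequence above moves the target-side interface into a namespace, assigns each side a 10.0.0.x/24 address, brings the links up, and verifies both directions with ping. A dry-run sketch of that plumbing, assuming the cvl_* names from this rig; it only prints the commands, since the real ones need root and the actual interfaces:

```shell
# Dry-run of the namespace plumbing shown above (common.sh@271-284):
# print each step instead of executing it. Swap 'echo' for real
# execution as root on a host that actually has the cvl_* netdevs.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
```

Isolating the target side in its own namespace is what lets a single host exercise a real TCP path between initiator (10.0.0.1) and target (10.0.0.2).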
22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3630440 ']' 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:36.855 22:16:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.855 [2024-10-12 22:16:55.255402] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:36.855 [2024-10-12 22:16:55.255465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.116 [2024-10-12 22:16:55.345365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:37.116 [2024-10-12 22:16:55.393830] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.117 [2024-10-12 22:16:55.393881] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:37.117 [2024-10-12 22:16:55.393890] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.117 [2024-10-12 22:16:55.393897] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.117 [2024-10-12 22:16:55.393903] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.117 [2024-10-12 22:16:55.394516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.117 [2024-10-12 22:16:55.394650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.117 [2024-10-12 22:16:55.394806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.117 [2024-10-12 22:16:55.394807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.687 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.687 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:37.687 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:37.687 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:37.687 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.688 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.688 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:37.688 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:37.688 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:37.688 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.688 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.688 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.948 [2024-10-12 22:16:56.285954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.948 
22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.948 Malloc1 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.948 [2024-10-12 22:16:56.351436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3630630 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:37.948 22:16:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:40.491 22:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:40.491 22:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.491 22:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.491 22:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.491 22:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:40.491 "tick_rate": 2400000000, 00:28:40.491 "poll_groups": [ 00:28:40.491 { 00:28:40.491 "name": "nvmf_tgt_poll_group_000", 00:28:40.491 "admin_qpairs": 1, 00:28:40.491 "io_qpairs": 1, 00:28:40.491 "current_admin_qpairs": 1, 00:28:40.491 "current_io_qpairs": 1, 00:28:40.491 "pending_bdev_io": 0, 00:28:40.491 "completed_nvme_io": 16464, 00:28:40.491 "transports": [ 00:28:40.491 { 00:28:40.491 "trtype": "TCP" 00:28:40.491 } 00:28:40.491 ] 00:28:40.491 }, 00:28:40.491 { 00:28:40.491 "name": "nvmf_tgt_poll_group_001", 00:28:40.491 "admin_qpairs": 0, 00:28:40.491 "io_qpairs": 1, 00:28:40.491 "current_admin_qpairs": 0, 00:28:40.491 "current_io_qpairs": 1, 00:28:40.491 "pending_bdev_io": 0, 00:28:40.491 "completed_nvme_io": 16571, 00:28:40.491 "transports": [ 
00:28:40.491 { 00:28:40.491 "trtype": "TCP" 00:28:40.491 } 00:28:40.491 ] 00:28:40.491 }, 00:28:40.491 { 00:28:40.491 "name": "nvmf_tgt_poll_group_002", 00:28:40.491 "admin_qpairs": 0, 00:28:40.491 "io_qpairs": 1, 00:28:40.491 "current_admin_qpairs": 0, 00:28:40.491 "current_io_qpairs": 1, 00:28:40.491 "pending_bdev_io": 0, 00:28:40.491 "completed_nvme_io": 17988, 00:28:40.491 "transports": [ 00:28:40.491 { 00:28:40.491 "trtype": "TCP" 00:28:40.491 } 00:28:40.491 ] 00:28:40.491 }, 00:28:40.491 { 00:28:40.491 "name": "nvmf_tgt_poll_group_003", 00:28:40.491 "admin_qpairs": 0, 00:28:40.491 "io_qpairs": 1, 00:28:40.491 "current_admin_qpairs": 0, 00:28:40.491 "current_io_qpairs": 1, 00:28:40.491 "pending_bdev_io": 0, 00:28:40.491 "completed_nvme_io": 16152, 00:28:40.491 "transports": [ 00:28:40.491 { 00:28:40.491 "trtype": "TCP" 00:28:40.491 } 00:28:40.491 ] 00:28:40.491 } 00:28:40.491 ] 00:28:40.491 }' 00:28:40.491 22:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:40.491 22:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:40.491 22:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:40.491 22:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:40.491 22:16:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3630630 00:28:48.625 Initializing NVMe Controllers 00:28:48.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:48.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:48.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:48.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:48.625 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:48.625 Initialization complete. Launching workers. 00:28:48.625 ======================================================== 00:28:48.625 Latency(us) 00:28:48.625 Device Information : IOPS MiB/s Average min max 00:28:48.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12325.00 48.14 5193.48 1365.42 12822.41 00:28:48.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13232.60 51.69 4835.88 1398.24 14336.75 00:28:48.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13393.80 52.32 4777.89 1270.33 14258.02 00:28:48.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12802.50 50.01 4998.90 1174.58 12735.16 00:28:48.625 ======================================================== 00:28:48.625 Total : 51753.88 202.16 4946.36 1174.58 14336.75 00:28:48.625 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:48.625 rmmod nvme_tcp 00:28:48.625 rmmod nvme_fabrics 00:28:48.625 rmmod nvme_keyring 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:48.625 22:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 3630440 ']' 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 3630440 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3630440 ']' 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3630440 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3630440 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3630440' 00:28:48.625 killing process with pid 3630440 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3630440 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3630440 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:48.625 
22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.625 22:17:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.535 22:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:50.535 22:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:50.535 22:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:50.535 22:17:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:52.445 22:17:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:54.358 22:17:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@472 -- # prepare_net_devs 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.647 22:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:28:59.647 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:59.648 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:59.648 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:59.648 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:59.648 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.648 22:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.648 22:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:28:59.648 00:28:59.648 --- 10.0.0.2 ping statistics --- 00:28:59.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.648 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:28:59.648 00:28:59.648 --- 10.0.0.1 ping statistics --- 00:28:59.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.648 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:59.648 net.core.busy_poll = 1 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:59.648 net.core.busy_read = 1 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:59.648 22:17:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=3635150 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 3635150 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3635150 ']' 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:59.910 22:17:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.910 [2024-10-12 22:17:18.307524] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:59.910 [2024-10-12 22:17:18.307593] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.171 [2024-10-12 22:17:18.399578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:00.171 [2024-10-12 22:17:18.447396] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.171 [2024-10-12 22:17:18.447446] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.171 [2024-10-12 22:17:18.447459] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.171 [2024-10-12 22:17:18.447466] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:00.171 [2024-10-12 22:17:18.447472] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.171 [2024-10-12 22:17:18.447636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.171 [2024-10-12 22:17:18.447799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.171 [2024-10-12 22:17:18.447959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.171 [2024-10-12 22:17:18.447960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.744 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.005 [2024-10-12 22:17:19.326818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.005 22:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.005 Malloc1 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.005 [2024-10-12 22:17:19.391270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3635439 
00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:01.005 22:17:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:02.919 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:02.919 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.919 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:03.180 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.180 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:03.180 "tick_rate": 2400000000, 00:29:03.180 "poll_groups": [ 00:29:03.180 { 00:29:03.180 "name": "nvmf_tgt_poll_group_000", 00:29:03.180 "admin_qpairs": 1, 00:29:03.180 "io_qpairs": 2, 00:29:03.180 "current_admin_qpairs": 1, 00:29:03.180 "current_io_qpairs": 2, 00:29:03.180 "pending_bdev_io": 0, 00:29:03.180 "completed_nvme_io": 26401, 00:29:03.180 "transports": [ 00:29:03.180 { 00:29:03.180 "trtype": "TCP" 00:29:03.180 } 00:29:03.180 ] 00:29:03.180 }, 00:29:03.180 { 00:29:03.180 "name": "nvmf_tgt_poll_group_001", 00:29:03.180 "admin_qpairs": 0, 00:29:03.180 "io_qpairs": 2, 00:29:03.180 "current_admin_qpairs": 0, 00:29:03.180 "current_io_qpairs": 2, 00:29:03.180 "pending_bdev_io": 0, 00:29:03.180 "completed_nvme_io": 29589, 00:29:03.180 "transports": [ 00:29:03.180 { 00:29:03.180 "trtype": "TCP" 00:29:03.180 } 00:29:03.180 ] 00:29:03.180 }, 00:29:03.180 { 00:29:03.180 "name": "nvmf_tgt_poll_group_002", 00:29:03.180 "admin_qpairs": 0, 00:29:03.180 "io_qpairs": 0, 00:29:03.180 "current_admin_qpairs": 0, 
00:29:03.180 "current_io_qpairs": 0, 00:29:03.180 "pending_bdev_io": 0, 00:29:03.180 "completed_nvme_io": 0, 00:29:03.180 "transports": [ 00:29:03.180 { 00:29:03.180 "trtype": "TCP" 00:29:03.180 } 00:29:03.180 ] 00:29:03.180 }, 00:29:03.180 { 00:29:03.180 "name": "nvmf_tgt_poll_group_003", 00:29:03.180 "admin_qpairs": 0, 00:29:03.180 "io_qpairs": 0, 00:29:03.180 "current_admin_qpairs": 0, 00:29:03.180 "current_io_qpairs": 0, 00:29:03.180 "pending_bdev_io": 0, 00:29:03.180 "completed_nvme_io": 0, 00:29:03.180 "transports": [ 00:29:03.181 { 00:29:03.181 "trtype": "TCP" 00:29:03.181 } 00:29:03.181 ] 00:29:03.181 } 00:29:03.181 ] 00:29:03.181 }' 00:29:03.181 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:03.181 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:03.181 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:29:03.181 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:29:03.181 22:17:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3635439 00:29:11.319 Initializing NVMe Controllers 00:29:11.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:11.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:11.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:11.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:11.319 Initialization complete. Launching workers. 
00:29:11.319 ======================================================== 00:29:11.319 Latency(us) 00:29:11.319 Device Information : IOPS MiB/s Average min max 00:29:11.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9057.18 35.38 7087.83 1339.10 53299.54 00:29:11.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9601.58 37.51 6664.63 1289.08 55078.26 00:29:11.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9969.17 38.94 6419.47 1011.91 55203.68 00:29:11.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9270.28 36.21 6903.61 1271.93 53652.75 00:29:11.319 ======================================================== 00:29:11.319 Total : 37898.20 148.04 6759.74 1011.91 55203.68 00:29:11.319 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:11.319 rmmod nvme_tcp 00:29:11.319 rmmod nvme_fabrics 00:29:11.319 rmmod nvme_keyring 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:11.319 22:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 3635150 ']' 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 3635150 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3635150 ']' 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3635150 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3635150 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3635150' 00:29:11.319 killing process with pid 3635150 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3635150 00:29:11.319 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3635150 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:29:11.580 
22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.580 22:17:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.493 22:17:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:13.493 22:17:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:13.493 00:29:13.493 real 0m53.024s 00:29:13.493 user 2m49.837s 00:29:13.493 sys 0m11.613s 00:29:13.493 22:17:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:13.493 22:17:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:13.493 ************************************ 00:29:13.493 END TEST nvmf_perf_adq 00:29:13.493 ************************************ 00:29:13.493 22:17:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:13.493 22:17:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:13.493 22:17:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:13.494 22:17:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:29:13.755 ************************************ 00:29:13.755 START TEST nvmf_shutdown 00:29:13.755 ************************************ 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:13.755 * Looking for test storage... 00:29:13.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:13.755 22:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:13.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.755 --rc genhtml_branch_coverage=1 00:29:13.755 --rc genhtml_function_coverage=1 00:29:13.755 --rc genhtml_legend=1 00:29:13.755 --rc geninfo_all_blocks=1 00:29:13.755 --rc geninfo_unexecuted_blocks=1 00:29:13.755 00:29:13.755 ' 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:13.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.755 --rc genhtml_branch_coverage=1 00:29:13.755 --rc genhtml_function_coverage=1 00:29:13.755 --rc genhtml_legend=1 00:29:13.755 --rc geninfo_all_blocks=1 00:29:13.755 --rc geninfo_unexecuted_blocks=1 00:29:13.755 00:29:13.755 ' 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:13.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.755 --rc genhtml_branch_coverage=1 00:29:13.755 --rc genhtml_function_coverage=1 00:29:13.755 --rc genhtml_legend=1 00:29:13.755 --rc geninfo_all_blocks=1 00:29:13.755 --rc geninfo_unexecuted_blocks=1 00:29:13.755 00:29:13.755 ' 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:13.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.755 --rc genhtml_branch_coverage=1 00:29:13.755 --rc genhtml_function_coverage=1 00:29:13.755 --rc genhtml_legend=1 00:29:13.755 --rc geninfo_all_blocks=1 00:29:13.755 --rc geninfo_unexecuted_blocks=1 00:29:13.755 00:29:13.755 ' 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:13.755 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:13.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:13.756 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.016 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:14.016 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@169 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:14.017 ************************************ 00:29:14.017 START TEST nvmf_shutdown_tc1 00:29:14.017 ************************************ 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.017 22:17:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:22.160 22:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.160 22:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:22.160 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:22.160 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:22.160 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.161 22:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:22.161 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:22.161 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.161 22:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:29:22.161 00:29:22.161 --- 10.0.0.2 ping statistics --- 00:29:22.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.161 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:22.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:29:22.161 00:29:22.161 --- 10.0.0.1 ping statistics --- 00:29:22.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.161 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:22.161 22:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=3641654 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 3641654 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3641654 ']' 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:22.161 22:17:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.161 [2024-10-12 22:17:39.837199] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:22.161 [2024-10-12 22:17:39.837266] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.161 [2024-10-12 22:17:39.927717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:22.161 [2024-10-12 22:17:39.977167] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.161 [2024-10-12 22:17:39.977240] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.161 [2024-10-12 22:17:39.977249] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.161 [2024-10-12 22:17:39.977256] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.161 [2024-10-12 22:17:39.977262] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:22.161 [2024-10-12 22:17:39.977459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.161 [2024-10-12 22:17:39.977621] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.161 [2024-10-12 22:17:39.977962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.161 [2024-10-12 22:17:39.977962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.423 [2024-10-12 22:17:40.719440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.423 22:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.423 22:17:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.423 Malloc1 00:29:22.423 [2024-10-12 22:17:40.836684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.423 Malloc2 00:29:22.423 Malloc3 00:29:22.685 Malloc4 00:29:22.685 Malloc5 00:29:22.685 Malloc6 00:29:22.685 Malloc7 00:29:22.685 Malloc8 00:29:22.947 Malloc9 
00:29:22.947 Malloc10 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3641963 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3641963 /var/tmp/bdevperf.sock 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3641963 ']' 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:22.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:22.947 { 00:29:22.947 "params": { 00:29:22.947 "name": "Nvme$subsystem", 00:29:22.947 "trtype": "$TEST_TRANSPORT", 00:29:22.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.947 "adrfam": "ipv4", 00:29:22.947 "trsvcid": "$NVMF_PORT", 00:29:22.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.947 "hdgst": ${hdgst:-false}, 00:29:22.947 "ddgst": ${ddgst:-false} 00:29:22.947 }, 00:29:22.947 "method": "bdev_nvme_attach_controller" 00:29:22.947 } 00:29:22.947 EOF 00:29:22.947 )") 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:22.947 22:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:22.947 { 00:29:22.947 "params": { 00:29:22.947 "name": "Nvme$subsystem", 00:29:22.947 "trtype": "$TEST_TRANSPORT", 00:29:22.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.947 "adrfam": "ipv4", 00:29:22.947 "trsvcid": "$NVMF_PORT", 00:29:22.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.947 "hdgst": ${hdgst:-false}, 00:29:22.947 "ddgst": ${ddgst:-false} 00:29:22.947 }, 00:29:22.947 "method": "bdev_nvme_attach_controller" 00:29:22.947 } 00:29:22.947 EOF 00:29:22.947 )") 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:22.947 { 00:29:22.947 "params": { 00:29:22.947 "name": "Nvme$subsystem", 00:29:22.947 "trtype": "$TEST_TRANSPORT", 00:29:22.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.947 "adrfam": "ipv4", 00:29:22.947 "trsvcid": "$NVMF_PORT", 00:29:22.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.947 "hdgst": ${hdgst:-false}, 00:29:22.947 "ddgst": ${ddgst:-false} 00:29:22.947 }, 00:29:22.947 "method": "bdev_nvme_attach_controller" 00:29:22.947 } 00:29:22.947 EOF 00:29:22.947 )") 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:22.947 { 
00:29:22.947 "params": { 00:29:22.947 "name": "Nvme$subsystem", 00:29:22.947 "trtype": "$TEST_TRANSPORT", 00:29:22.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.947 "adrfam": "ipv4", 00:29:22.947 "trsvcid": "$NVMF_PORT", 00:29:22.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.947 "hdgst": ${hdgst:-false}, 00:29:22.947 "ddgst": ${ddgst:-false} 00:29:22.947 }, 00:29:22.947 "method": "bdev_nvme_attach_controller" 00:29:22.947 } 00:29:22.947 EOF 00:29:22.947 )") 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:22.947 { 00:29:22.947 "params": { 00:29:22.947 "name": "Nvme$subsystem", 00:29:22.947 "trtype": "$TEST_TRANSPORT", 00:29:22.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.947 "adrfam": "ipv4", 00:29:22.947 "trsvcid": "$NVMF_PORT", 00:29:22.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.947 "hdgst": ${hdgst:-false}, 00:29:22.947 "ddgst": ${ddgst:-false} 00:29:22.947 }, 00:29:22.947 "method": "bdev_nvme_attach_controller" 00:29:22.947 } 00:29:22.947 EOF 00:29:22.947 )") 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:22.947 { 00:29:22.947 "params": { 00:29:22.947 "name": "Nvme$subsystem", 00:29:22.947 "trtype": "$TEST_TRANSPORT", 00:29:22.947 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:29:22.947 "adrfam": "ipv4", 00:29:22.947 "trsvcid": "$NVMF_PORT", 00:29:22.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.947 "hdgst": ${hdgst:-false}, 00:29:22.947 "ddgst": ${ddgst:-false} 00:29:22.947 }, 00:29:22.947 "method": "bdev_nvme_attach_controller" 00:29:22.947 } 00:29:22.947 EOF 00:29:22.947 )") 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:22.947 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:22.947 { 00:29:22.947 "params": { 00:29:22.947 "name": "Nvme$subsystem", 00:29:22.947 "trtype": "$TEST_TRANSPORT", 00:29:22.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.947 "adrfam": "ipv4", 00:29:22.947 "trsvcid": "$NVMF_PORT", 00:29:22.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.947 "hdgst": ${hdgst:-false}, 00:29:22.947 "ddgst": ${ddgst:-false} 00:29:22.947 }, 00:29:22.947 "method": "bdev_nvme_attach_controller" 00:29:22.947 } 00:29:22.947 EOF 00:29:22.947 )") 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:22.948 [2024-10-12 22:17:41.362654] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:22.948 [2024-10-12 22:17:41.362727] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:22.948 { 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme$subsystem", 00:29:22.948 "trtype": "$TEST_TRANSPORT", 00:29:22.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "$NVMF_PORT", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.948 "hdgst": ${hdgst:-false}, 00:29:22.948 "ddgst": ${ddgst:-false} 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 } 00:29:22.948 EOF 00:29:22.948 )") 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:22.948 { 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme$subsystem", 00:29:22.948 "trtype": "$TEST_TRANSPORT", 00:29:22.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "$NVMF_PORT", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.948 "hdgst": ${hdgst:-false}, 00:29:22.948 "ddgst": ${ddgst:-false} 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 
00:29:22.948 } 00:29:22.948 EOF 00:29:22.948 )") 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:22.948 { 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme$subsystem", 00:29:22.948 "trtype": "$TEST_TRANSPORT", 00:29:22.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "$NVMF_PORT", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.948 "hdgst": ${hdgst:-false}, 00:29:22.948 "ddgst": ${ddgst:-false} 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 } 00:29:22.948 EOF 00:29:22.948 )") 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:29:22.948 22:17:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme1", 00:29:22.948 "trtype": "tcp", 00:29:22.948 "traddr": "10.0.0.2", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "4420", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:22.948 "hdgst": false, 00:29:22.948 "ddgst": false 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 },{ 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme2", 00:29:22.948 "trtype": "tcp", 00:29:22.948 "traddr": "10.0.0.2", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "4420", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:22.948 "hdgst": false, 00:29:22.948 "ddgst": false 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 },{ 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme3", 00:29:22.948 "trtype": "tcp", 00:29:22.948 "traddr": "10.0.0.2", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "4420", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:22.948 "hdgst": false, 00:29:22.948 "ddgst": false 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 },{ 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme4", 00:29:22.948 "trtype": "tcp", 00:29:22.948 "traddr": "10.0.0.2", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "4420", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:22.948 "hdgst": false, 00:29:22.948 "ddgst": false 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 },{ 00:29:22.948 "params": { 
00:29:22.948 "name": "Nvme5", 00:29:22.948 "trtype": "tcp", 00:29:22.948 "traddr": "10.0.0.2", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "4420", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:22.948 "hdgst": false, 00:29:22.948 "ddgst": false 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 },{ 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme6", 00:29:22.948 "trtype": "tcp", 00:29:22.948 "traddr": "10.0.0.2", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "4420", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:22.948 "hdgst": false, 00:29:22.948 "ddgst": false 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 },{ 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme7", 00:29:22.948 "trtype": "tcp", 00:29:22.948 "traddr": "10.0.0.2", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "4420", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:22.948 "hdgst": false, 00:29:22.948 "ddgst": false 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 },{ 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme8", 00:29:22.948 "trtype": "tcp", 00:29:22.948 "traddr": "10.0.0.2", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "4420", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:22.948 "hdgst": false, 00:29:22.948 "ddgst": false 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 },{ 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme9", 00:29:22.948 "trtype": "tcp", 00:29:22.948 "traddr": "10.0.0.2", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "4420", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:29:22.948 "hdgst": false, 00:29:22.948 "ddgst": false 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 },{ 00:29:22.948 "params": { 00:29:22.948 "name": "Nvme10", 00:29:22.948 "trtype": "tcp", 00:29:22.948 "traddr": "10.0.0.2", 00:29:22.948 "adrfam": "ipv4", 00:29:22.948 "trsvcid": "4420", 00:29:22.948 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:22.948 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:22.948 "hdgst": false, 00:29:22.948 "ddgst": false 00:29:22.948 }, 00:29:22.948 "method": "bdev_nvme_attach_controller" 00:29:22.948 }' 00:29:23.210 [2024-10-12 22:17:41.449901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.210 [2024-10-12 22:17:41.497351] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.652 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.652 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:29:24.652 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:24.652 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.652 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:24.652 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.652 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3641963 00:29:24.652 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:24.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3641963 Killed 
$rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:24.652 22:17:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3641654 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:25.619 { 00:29:25.619 "params": { 00:29:25.619 "name": "Nvme$subsystem", 00:29:25.619 "trtype": "$TEST_TRANSPORT", 00:29:25.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.619 "adrfam": "ipv4", 00:29:25.619 "trsvcid": "$NVMF_PORT", 00:29:25.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.619 "hdgst": ${hdgst:-false}, 00:29:25.619 "ddgst": ${ddgst:-false} 00:29:25.619 }, 00:29:25.619 "method": "bdev_nvme_attach_controller" 00:29:25.619 } 00:29:25.619 EOF 00:29:25.619 )") 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:25.619 22:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:25.619 { 00:29:25.619 "params": { 00:29:25.619 "name": "Nvme$subsystem", 00:29:25.619 "trtype": "$TEST_TRANSPORT", 00:29:25.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.619 "adrfam": "ipv4", 00:29:25.619 "trsvcid": "$NVMF_PORT", 00:29:25.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.619 "hdgst": ${hdgst:-false}, 00:29:25.619 "ddgst": ${ddgst:-false} 00:29:25.619 }, 00:29:25.619 "method": "bdev_nvme_attach_controller" 00:29:25.619 } 00:29:25.619 EOF 00:29:25.619 )") 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:25.619 { 00:29:25.619 "params": { 00:29:25.619 "name": "Nvme$subsystem", 00:29:25.619 "trtype": "$TEST_TRANSPORT", 00:29:25.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.619 "adrfam": "ipv4", 00:29:25.619 "trsvcid": "$NVMF_PORT", 00:29:25.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.619 "hdgst": ${hdgst:-false}, 00:29:25.619 "ddgst": ${ddgst:-false} 00:29:25.619 }, 00:29:25.619 "method": "bdev_nvme_attach_controller" 00:29:25.619 } 00:29:25.619 EOF 00:29:25.619 )") 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:25.619 
22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:25.619 { 00:29:25.619 "params": { 00:29:25.619 "name": "Nvme$subsystem", 00:29:25.619 "trtype": "$TEST_TRANSPORT", 00:29:25.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.619 "adrfam": "ipv4", 00:29:25.619 "trsvcid": "$NVMF_PORT", 00:29:25.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.619 "hdgst": ${hdgst:-false}, 00:29:25.619 "ddgst": ${ddgst:-false} 00:29:25.619 }, 00:29:25.619 "method": "bdev_nvme_attach_controller" 00:29:25.619 } 00:29:25.619 EOF 00:29:25.619 )") 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:25.619 { 00:29:25.619 "params": { 00:29:25.619 "name": "Nvme$subsystem", 00:29:25.619 "trtype": "$TEST_TRANSPORT", 00:29:25.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.619 "adrfam": "ipv4", 00:29:25.619 "trsvcid": "$NVMF_PORT", 00:29:25.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.619 "hdgst": ${hdgst:-false}, 00:29:25.619 "ddgst": ${ddgst:-false} 00:29:25.619 }, 00:29:25.619 "method": "bdev_nvme_attach_controller" 00:29:25.619 } 00:29:25.619 EOF 00:29:25.619 )") 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:29:25.619 { 00:29:25.619 "params": { 00:29:25.619 "name": "Nvme$subsystem", 00:29:25.619 "trtype": "$TEST_TRANSPORT", 00:29:25.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.619 "adrfam": "ipv4", 00:29:25.619 "trsvcid": "$NVMF_PORT", 00:29:25.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.619 "hdgst": ${hdgst:-false}, 00:29:25.619 "ddgst": ${ddgst:-false} 00:29:25.619 }, 00:29:25.619 "method": "bdev_nvme_attach_controller" 00:29:25.619 } 00:29:25.619 EOF 00:29:25.619 )") 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:25.619 [2024-10-12 22:17:43.886289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:25.619 [2024-10-12 22:17:43.886345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642605 ] 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:25.619 { 00:29:25.619 "params": { 00:29:25.619 "name": "Nvme$subsystem", 00:29:25.619 "trtype": "$TEST_TRANSPORT", 00:29:25.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.619 "adrfam": "ipv4", 00:29:25.619 "trsvcid": "$NVMF_PORT", 00:29:25.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.619 "hdgst": ${hdgst:-false}, 00:29:25.619 "ddgst": ${ddgst:-false} 00:29:25.619 }, 00:29:25.619 "method": "bdev_nvme_attach_controller" 00:29:25.619 } 00:29:25.619 EOF 00:29:25.619 )") 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@578 -- # cat 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:25.619 { 00:29:25.619 "params": { 00:29:25.619 "name": "Nvme$subsystem", 00:29:25.619 "trtype": "$TEST_TRANSPORT", 00:29:25.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.619 "adrfam": "ipv4", 00:29:25.619 "trsvcid": "$NVMF_PORT", 00:29:25.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.619 "hdgst": ${hdgst:-false}, 00:29:25.619 "ddgst": ${ddgst:-false} 00:29:25.619 }, 00:29:25.619 "method": "bdev_nvme_attach_controller" 00:29:25.619 } 00:29:25.619 EOF 00:29:25.619 )") 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:25.619 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:25.619 { 00:29:25.619 "params": { 00:29:25.620 "name": "Nvme$subsystem", 00:29:25.620 "trtype": "$TEST_TRANSPORT", 00:29:25.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "$NVMF_PORT", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.620 "hdgst": ${hdgst:-false}, 00:29:25.620 "ddgst": ${ddgst:-false} 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 } 00:29:25.620 EOF 00:29:25.620 )") 00:29:25.620 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:25.620 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:25.620 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:25.620 { 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme$subsystem", 00:29:25.620 "trtype": "$TEST_TRANSPORT", 00:29:25.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "$NVMF_PORT", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.620 "hdgst": ${hdgst:-false}, 00:29:25.620 "ddgst": ${ddgst:-false} 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 } 00:29:25.620 EOF 00:29:25.620 )") 00:29:25.620 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:29:25.620 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:29:25.620 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:29:25.620 22:17:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme1", 00:29:25.620 "trtype": "tcp", 00:29:25.620 "traddr": "10.0.0.2", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "4420", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.620 "hdgst": false, 00:29:25.620 "ddgst": false 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 },{ 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme2", 00:29:25.620 "trtype": "tcp", 00:29:25.620 "traddr": "10.0.0.2", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "4420", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:25.620 "hdgst": false, 00:29:25.620 "ddgst": false 00:29:25.620 }, 
00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 },{ 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme3", 00:29:25.620 "trtype": "tcp", 00:29:25.620 "traddr": "10.0.0.2", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "4420", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:25.620 "hdgst": false, 00:29:25.620 "ddgst": false 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 },{ 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme4", 00:29:25.620 "trtype": "tcp", 00:29:25.620 "traddr": "10.0.0.2", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "4420", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:25.620 "hdgst": false, 00:29:25.620 "ddgst": false 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 },{ 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme5", 00:29:25.620 "trtype": "tcp", 00:29:25.620 "traddr": "10.0.0.2", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "4420", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:25.620 "hdgst": false, 00:29:25.620 "ddgst": false 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 },{ 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme6", 00:29:25.620 "trtype": "tcp", 00:29:25.620 "traddr": "10.0.0.2", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "4420", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:25.620 "hdgst": false, 00:29:25.620 "ddgst": false 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 },{ 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme7", 00:29:25.620 "trtype": "tcp", 00:29:25.620 "traddr": "10.0.0.2", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "4420", 00:29:25.620 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:25.620 "hdgst": false, 00:29:25.620 "ddgst": false 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 },{ 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme8", 00:29:25.620 "trtype": "tcp", 00:29:25.620 "traddr": "10.0.0.2", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "4420", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:25.620 "hdgst": false, 00:29:25.620 "ddgst": false 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 },{ 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme9", 00:29:25.620 "trtype": "tcp", 00:29:25.620 "traddr": "10.0.0.2", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "4420", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:25.620 "hdgst": false, 00:29:25.620 "ddgst": false 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 },{ 00:29:25.620 "params": { 00:29:25.620 "name": "Nvme10", 00:29:25.620 "trtype": "tcp", 00:29:25.620 "traddr": "10.0.0.2", 00:29:25.620 "adrfam": "ipv4", 00:29:25.620 "trsvcid": "4420", 00:29:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:25.620 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:25.620 "hdgst": false, 00:29:25.620 "ddgst": false 00:29:25.620 }, 00:29:25.620 "method": "bdev_nvme_attach_controller" 00:29:25.620 }' 00:29:25.620 [2024-10-12 22:17:43.966996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.620 [2024-10-12 22:17:43.997909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.010 Running I/O for 1 seconds... 
00:29:28.211 1867.00 IOPS, 116.69 MiB/s 00:29:28.211 Latency(us) 00:29:28.211 [2024-10-12T20:17:46.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.211 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.211 Verification LBA range: start 0x0 length 0x400 00:29:28.211 Nvme1n1 : 1.16 276.87 17.30 0.00 0.00 228889.26 17257.81 242920.11 00:29:28.211 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.211 Verification LBA range: start 0x0 length 0x400 00:29:28.211 Nvme2n1 : 1.05 183.37 11.46 0.00 0.00 339081.10 20971.52 304087.04 00:29:28.211 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.211 Verification LBA range: start 0x0 length 0x400 00:29:28.211 Nvme3n1 : 1.15 278.83 17.43 0.00 0.00 218258.43 22063.79 220200.96 00:29:28.211 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.211 Verification LBA range: start 0x0 length 0x400 00:29:28.211 Nvme4n1 : 1.13 227.09 14.19 0.00 0.00 264375.89 17694.72 232434.35 00:29:28.211 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.211 Verification LBA range: start 0x0 length 0x400 00:29:28.211 Nvme5n1 : 1.18 271.66 16.98 0.00 0.00 218023.08 21954.56 232434.35 00:29:28.211 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.211 Verification LBA range: start 0x0 length 0x400 00:29:28.211 Nvme6n1 : 1.14 229.94 14.37 0.00 0.00 250737.94 7973.55 249910.61 00:29:28.211 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.211 Verification LBA range: start 0x0 length 0x400 00:29:28.211 Nvme7n1 : 1.12 227.62 14.23 0.00 0.00 249926.61 17476.27 246415.36 00:29:28.211 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.211 Verification LBA range: start 0x0 length 0x400 00:29:28.211 Nvme8n1 : 1.15 223.29 13.96 0.00 0.00 250422.19 20862.29 241172.48 
00:29:28.211 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.211 Verification LBA range: start 0x0 length 0x400 00:29:28.212 Nvme9n1 : 1.18 273.05 17.07 0.00 0.00 201768.53 1269.76 262144.00 00:29:28.212 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:28.212 Verification LBA range: start 0x0 length 0x400 00:29:28.212 Nvme10n1 : 1.19 268.43 16.78 0.00 0.00 202016.47 6389.76 283115.52 00:29:28.212 [2024-10-12T20:17:46.701Z] =================================================================================================================== 00:29:28.212 [2024-10-12T20:17:46.701Z] Total : 2460.15 153.76 0.00 0.00 236898.26 1269.76 304087.04 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:28.472 22:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:28.472 rmmod nvme_tcp 00:29:28.472 rmmod nvme_fabrics 00:29:28.472 rmmod nvme_keyring 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:28.472 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 3641654 ']' 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 3641654 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3641654 ']' 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3641654 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3641654 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:28.473 22:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3641654' 00:29:28.473 killing process with pid 3641654 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3641654 00:29:28.473 22:17:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3641654 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.734 22:17:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.279 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:31.279 00:29:31.279 real 0m16.905s 00:29:31.279 user 0m34.461s 00:29:31.279 sys 0m7.006s 00:29:31.279 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:31.279 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:31.279 ************************************ 00:29:31.279 END TEST nvmf_shutdown_tc1 00:29:31.279 ************************************ 00:29:31.279 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:31.279 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:31.279 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:31.279 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:31.279 ************************************ 00:29:31.279 START TEST nvmf_shutdown_tc2 00:29:31.279 ************************************ 00:29:31.279 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:29:31.279 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.280 22:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.280 22:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.280 22:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:31.280 Found 0000:4b:00.0 (0x8086 - 0x159b) 
00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:31.280 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:31.280 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.280 22:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:31.280 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.280 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:31.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:29:31.281 00:29:31.281 --- 10.0.0.2 ping statistics --- 00:29:31.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.281 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:31.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:29:31.281 00:29:31.281 --- 10.0.0.1 ping statistics --- 00:29:31.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.281 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.281 
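The `nvmf_tcp_init` steps traced above (flush addresses, create the `cvl_0_0_ns_spdk` namespace, move the target interface into it, assign 10.0.0.1/24 and 10.0.0.2/24, open TCP port 4420, then ping both directions) can be sketched as a dry-run script. This is a hypothetical reconstruction from the log, not the harness's actual `nvmf/common.sh`; interface names, addresses, and the namespace name are copied from the trace, and commands are collected into an array instead of executed so the sequence can be inspected without root privileges.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup shown in the log above.
# Names/addresses mirror the trace; nothing here is executed for real.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0     # moved into the namespace; serves 10.0.0.2 (target side)
INITIATOR_IF=cvl_0_1  # stays in the default namespace; 10.0.0.1 (initiator)

cmds=()
plan() { cmds+=("$*"); }   # swap the body for: "$@"  to actually run (requires root)

plan ip -4 addr flush "$TARGET_IF"
plan ip -4 addr flush "$INITIATOR_IF"
plan ip netns add "$NS"
plan ip link set "$TARGET_IF" netns "$NS"
plan ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
plan ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
plan ip link set "$INITIATOR_IF" up
plan ip netns exec "$NS" ip link set "$TARGET_IF" up
plan ip netns exec "$NS" ip link set lo up
plan iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
plan ping -c 1 10.0.0.2                        # initiator -> target
plan ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

printf '%s\n' "${cmds[@]}"
```

Running the target inside its own namespace while the initiator stays in the default one is what lets a single machine exercise a real TCP path between the two NIC ports, which is why the app command later becomes `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`.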
22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3643771 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3643771 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3643771 ']' 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:31.281 22:17:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:31.281 [2024-10-12 22:17:49.705202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:31.281 [2024-10-12 22:17:49.705266] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.542 [2024-10-12 22:17:49.791479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:31.542 [2024-10-12 22:17:49.825975] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.542 [2024-10-12 22:17:49.826010] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.542 [2024-10-12 22:17:49.826016] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.542 [2024-10-12 22:17:49.826020] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.542 [2024-10-12 22:17:49.826025] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:31.542 [2024-10-12 22:17:49.826175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.542 [2024-10-12 22:17:49.826308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.542 [2024-10-12 22:17:49.826459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.542 [2024-10-12 22:17:49.826461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.113 [2024-10-12 22:17:50.558306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.113 22:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.113 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.375 22:17:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.375 Malloc1 00:29:32.375 [2024-10-12 22:17:50.656713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.375 Malloc2 00:29:32.375 Malloc3 00:29:32.375 Malloc4 00:29:32.375 Malloc5 00:29:32.375 Malloc6 00:29:32.636 Malloc7 00:29:32.636 Malloc8 00:29:32.636 Malloc9 
00:29:32.636 Malloc10 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3644142 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3644142 /var/tmp/bdevperf.sock 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3644142 ']' 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:32.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:32.636 { 00:29:32.636 "params": { 00:29:32.636 "name": "Nvme$subsystem", 00:29:32.636 "trtype": "$TEST_TRANSPORT", 00:29:32.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.636 "adrfam": "ipv4", 00:29:32.636 "trsvcid": "$NVMF_PORT", 00:29:32.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.636 "hdgst": ${hdgst:-false}, 00:29:32.636 "ddgst": ${ddgst:-false} 00:29:32.636 }, 00:29:32.636 "method": "bdev_nvme_attach_controller" 00:29:32.636 } 00:29:32.636 EOF 00:29:32.636 )") 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 
00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:32.636 { 00:29:32.636 "params": { 00:29:32.636 "name": "Nvme$subsystem", 00:29:32.636 "trtype": "$TEST_TRANSPORT", 00:29:32.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.636 "adrfam": "ipv4", 00:29:32.636 "trsvcid": "$NVMF_PORT", 00:29:32.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.636 "hdgst": ${hdgst:-false}, 00:29:32.636 "ddgst": ${ddgst:-false} 00:29:32.636 }, 00:29:32.636 "method": "bdev_nvme_attach_controller" 00:29:32.636 } 00:29:32.636 EOF 00:29:32.636 )") 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:32.636 { 00:29:32.636 "params": { 00:29:32.636 "name": "Nvme$subsystem", 00:29:32.636 "trtype": "$TEST_TRANSPORT", 00:29:32.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.636 "adrfam": "ipv4", 00:29:32.636 "trsvcid": "$NVMF_PORT", 00:29:32.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.636 "hdgst": ${hdgst:-false}, 00:29:32.636 "ddgst": ${ddgst:-false} 00:29:32.636 }, 00:29:32.636 "method": "bdev_nvme_attach_controller" 00:29:32.636 } 00:29:32.636 EOF 00:29:32.636 )") 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat 
<<-EOF 00:29:32.636 { 00:29:32.636 "params": { 00:29:32.636 "name": "Nvme$subsystem", 00:29:32.636 "trtype": "$TEST_TRANSPORT", 00:29:32.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.636 "adrfam": "ipv4", 00:29:32.636 "trsvcid": "$NVMF_PORT", 00:29:32.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.636 "hdgst": ${hdgst:-false}, 00:29:32.636 "ddgst": ${ddgst:-false} 00:29:32.636 }, 00:29:32.636 "method": "bdev_nvme_attach_controller" 00:29:32.636 } 00:29:32.636 EOF 00:29:32.636 )") 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:32.636 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:32.636 { 00:29:32.636 "params": { 00:29:32.637 "name": "Nvme$subsystem", 00:29:32.637 "trtype": "$TEST_TRANSPORT", 00:29:32.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.637 "adrfam": "ipv4", 00:29:32.637 "trsvcid": "$NVMF_PORT", 00:29:32.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.637 "hdgst": ${hdgst:-false}, 00:29:32.637 "ddgst": ${ddgst:-false} 00:29:32.637 }, 00:29:32.637 "method": "bdev_nvme_attach_controller" 00:29:32.637 } 00:29:32.637 EOF 00:29:32.637 )") 00:29:32.637 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:32.637 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:32.637 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:32.637 { 00:29:32.637 "params": { 00:29:32.637 "name": "Nvme$subsystem", 00:29:32.637 "trtype": "$TEST_TRANSPORT", 
00:29:32.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.637 "adrfam": "ipv4", 00:29:32.637 "trsvcid": "$NVMF_PORT", 00:29:32.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.637 "hdgst": ${hdgst:-false}, 00:29:32.637 "ddgst": ${ddgst:-false} 00:29:32.637 }, 00:29:32.637 "method": "bdev_nvme_attach_controller" 00:29:32.637 } 00:29:32.637 EOF 00:29:32.637 )") 00:29:32.637 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:32.637 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:32.637 [2024-10-12 22:17:51.108750] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:32.637 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:32.637 { 00:29:32.637 "params": { 00:29:32.637 "name": "Nvme$subsystem", 00:29:32.637 "trtype": "$TEST_TRANSPORT", 00:29:32.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.637 "adrfam": "ipv4", 00:29:32.637 "trsvcid": "$NVMF_PORT", 00:29:32.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.637 "hdgst": ${hdgst:-false}, 00:29:32.637 "ddgst": ${ddgst:-false} 00:29:32.637 }, 00:29:32.637 "method": "bdev_nvme_attach_controller" 00:29:32.637 } 00:29:32.637 EOF 00:29:32.637 )") 00:29:32.637 [2024-10-12 22:17:51.108808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644142 ] 00:29:32.637 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:32.637 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:32.637 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:32.637 { 00:29:32.637 "params": { 00:29:32.637 "name": "Nvme$subsystem", 00:29:32.637 "trtype": "$TEST_TRANSPORT", 00:29:32.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.637 "adrfam": "ipv4", 00:29:32.637 "trsvcid": "$NVMF_PORT", 00:29:32.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.637 "hdgst": ${hdgst:-false}, 00:29:32.637 "ddgst": ${ddgst:-false} 00:29:32.637 }, 00:29:32.637 "method": "bdev_nvme_attach_controller" 00:29:32.637 } 00:29:32.637 EOF 00:29:32.637 )") 00:29:32.637 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:32.897 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:32.897 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:32.897 { 00:29:32.897 "params": { 00:29:32.897 "name": "Nvme$subsystem", 00:29:32.897 "trtype": "$TEST_TRANSPORT", 00:29:32.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.897 "adrfam": "ipv4", 00:29:32.897 "trsvcid": "$NVMF_PORT", 00:29:32.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.897 "hdgst": ${hdgst:-false}, 00:29:32.897 "ddgst": ${ddgst:-false} 00:29:32.897 }, 00:29:32.897 "method": "bdev_nvme_attach_controller" 00:29:32.897 } 00:29:32.897 EOF 00:29:32.897 )") 00:29:32.897 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:32.898 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:32.898 22:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:32.898 { 00:29:32.898 "params": { 00:29:32.898 "name": "Nvme$subsystem", 00:29:32.898 "trtype": "$TEST_TRANSPORT", 00:29:32.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "$NVMF_PORT", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.898 "hdgst": ${hdgst:-false}, 00:29:32.898 "ddgst": ${ddgst:-false} 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 } 00:29:32.898 EOF 00:29:32.898 )") 00:29:32.898 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:29:32.898 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 00:29:32.898 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:29:32.898 22:17:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:32.898 "params": { 00:29:32.898 "name": "Nvme1", 00:29:32.898 "trtype": "tcp", 00:29:32.898 "traddr": "10.0.0.2", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "4420", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:32.898 "hdgst": false, 00:29:32.898 "ddgst": false 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 },{ 00:29:32.898 "params": { 00:29:32.898 "name": "Nvme2", 00:29:32.898 "trtype": "tcp", 00:29:32.898 "traddr": "10.0.0.2", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "4420", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:32.898 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:32.898 "hdgst": false, 00:29:32.898 "ddgst": false 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 },{ 
00:29:32.898 "params": { 00:29:32.898 "name": "Nvme3", 00:29:32.898 "trtype": "tcp", 00:29:32.898 "traddr": "10.0.0.2", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "4420", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:32.898 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:32.898 "hdgst": false, 00:29:32.898 "ddgst": false 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 },{ 00:29:32.898 "params": { 00:29:32.898 "name": "Nvme4", 00:29:32.898 "trtype": "tcp", 00:29:32.898 "traddr": "10.0.0.2", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "4420", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:32.898 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:32.898 "hdgst": false, 00:29:32.898 "ddgst": false 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 },{ 00:29:32.898 "params": { 00:29:32.898 "name": "Nvme5", 00:29:32.898 "trtype": "tcp", 00:29:32.898 "traddr": "10.0.0.2", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "4420", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:32.898 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:32.898 "hdgst": false, 00:29:32.898 "ddgst": false 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 },{ 00:29:32.898 "params": { 00:29:32.898 "name": "Nvme6", 00:29:32.898 "trtype": "tcp", 00:29:32.898 "traddr": "10.0.0.2", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "4420", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:32.898 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:32.898 "hdgst": false, 00:29:32.898 "ddgst": false 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 },{ 00:29:32.898 "params": { 00:29:32.898 "name": "Nvme7", 00:29:32.898 "trtype": "tcp", 00:29:32.898 "traddr": "10.0.0.2", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "4420", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:32.898 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:29:32.898 "hdgst": false, 00:29:32.898 "ddgst": false 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 },{ 00:29:32.898 "params": { 00:29:32.898 "name": "Nvme8", 00:29:32.898 "trtype": "tcp", 00:29:32.898 "traddr": "10.0.0.2", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "4420", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:32.898 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:32.898 "hdgst": false, 00:29:32.898 "ddgst": false 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 },{ 00:29:32.898 "params": { 00:29:32.898 "name": "Nvme9", 00:29:32.898 "trtype": "tcp", 00:29:32.898 "traddr": "10.0.0.2", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "4420", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:32.898 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:32.898 "hdgst": false, 00:29:32.898 "ddgst": false 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 },{ 00:29:32.898 "params": { 00:29:32.898 "name": "Nvme10", 00:29:32.898 "trtype": "tcp", 00:29:32.898 "traddr": "10.0.0.2", 00:29:32.898 "adrfam": "ipv4", 00:29:32.898 "trsvcid": "4420", 00:29:32.898 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:32.898 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:32.898 "hdgst": false, 00:29:32.898 "ddgst": false 00:29:32.898 }, 00:29:32.898 "method": "bdev_nvme_attach_controller" 00:29:32.898 }' 00:29:32.898 [2024-10-12 22:17:51.186496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.898 [2024-10-12 22:17:51.217639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.283 Running I/O for 10 seconds... 
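The trace above (nvmf/common.sh@558-582) assembles one JSON fragment per subsystem into a bash array, then joins the fragments with commas and pretty-prints the result through jq. A minimal standalone sketch of that pattern follows; the variable values are illustrative, and escaped strings stand in for the `<<-EOF` heredoc the real script uses (which is tab-indentation sensitive):

```shell
# Sketch of the config-assembly pattern traced above (nvmf/common.sh@558-582).
# One JSON fragment per subsystem goes into an array; the fragments are then
# joined with "," exactly as the IFS=, / printf '%s\n' "${config[*]}" step does.
TEST_TRANSPORT=tcp            # illustrative values
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("{
    \"params\": {
      \"name\": \"Nvme$subsystem\",
      \"trtype\": \"$TEST_TRANSPORT\",
      \"traddr\": \"$NVMF_FIRST_TARGET_IP\",
      \"adrfam\": \"ipv4\",
      \"trsvcid\": \"$NVMF_PORT\",
      \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\",
      \"hostnqn\": \"nqn.2016-06.io.spdk:host$subsystem\",
      \"hdgst\": ${hdgst:-false},
      \"ddgst\": ${ddgst:-false}
    },
    \"method\": \"bdev_nvme_attach_controller\"
  }")
done

# Join the fragments with "," (the real script then pipes this through jq).
joined=$(IFS=,; printf '%s\n' "${config[*]}")
```

The `${hdgst:-false}` / `${ddgst:-false}` defaults are why the rendered config above shows `"hdgst": false` when the caller never set a digest option.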
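The run that follows ("Running I/O for 10 seconds...") waits for traffic with the waitforio loop from target/shutdown.sh@58-70: up to ten polls, 0.25 s apart, succeeding once the bdev's read-op counter reaches 100. A sketch of that loop is below; `poll_read_ops` is a stub standing in for `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`, so the growth rate of the counter here is invented:

```shell
# Sketch of the waitforio loop (target/shutdown.sh@58-70) exercised below.
# poll_read_ops is a stub for the real rpc_cmd + jq pipeline; it pretends
# I/O grows by 64 read ops per poll.
reads=0
poll_read_ops() {
  reads=$((reads + 64))
  read_io_count=$reads
}

waitforio() {
  ret=1
  i=10
  while [ "$i" -ne 0 ]; do
    poll_read_ops
    if [ "$read_io_count" -ge 100 ]; then
      ret=0                   # enough I/O observed: success
      break
    fi
    sleep 0.25                # same back-off as shutdown.sh@68
    i=$((i - 1))
  done
  return "$ret"
}

waitforio
```

In the trace the counter goes 3, 67, 131 across three polls before `-ge 100` finally holds and the loop breaks with `ret=0`.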
00:29:34.283 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:34.283 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:34.283 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:34.283 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.283 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:34.544 22:17:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:34.805 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:34.805 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:34.805 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:34.805 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:34.805 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.805 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.805 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.805 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:34.805 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:34.805 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3644142 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3644142 
']' 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3644142 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3644142 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3644142' 00:29:35.065 killing process with pid 3644142 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3644142 00:29:35.065 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3644142 00:29:35.326 Received shutdown signal, test time was about 0.983371 seconds 00:29:35.326 00:29:35.326 Latency(us) 00:29:35.326 [2024-10-12T20:17:53.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.326 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.326 Verification LBA range: start 0x0 length 0x400 00:29:35.326 Nvme1n1 : 0.95 202.33 12.65 0.00 0.00 312654.79 24466.77 253405.87 00:29:35.326 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.326 Verification LBA range: start 0x0 length 0x400 00:29:35.326 Nvme2n1 : 0.95 201.41 12.59 0.00 0.00 307642.88 16056.32 255153.49 
00:29:35.326 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.326 Verification LBA range: start 0x0 length 0x400 00:29:35.326 Nvme3n1 : 0.97 263.35 16.46 0.00 0.00 230273.07 19223.89 251658.24 00:29:35.326 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.326 Verification LBA range: start 0x0 length 0x400 00:29:35.326 Nvme4n1 : 0.97 267.76 16.73 0.00 0.00 221364.70 3331.41 253405.87 00:29:35.326 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.326 Verification LBA range: start 0x0 length 0x400 00:29:35.326 Nvme5n1 : 0.98 260.58 16.29 0.00 0.00 223279.57 14199.47 249910.61 00:29:35.326 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.326 Verification LBA range: start 0x0 length 0x400 00:29:35.326 Nvme6n1 : 0.98 262.56 16.41 0.00 0.00 216495.15 17476.27 244667.73 00:29:35.326 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.326 Verification LBA range: start 0x0 length 0x400 00:29:35.326 Nvme7n1 : 0.96 265.99 16.62 0.00 0.00 207819.73 11960.32 248162.99 00:29:35.326 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.326 Verification LBA range: start 0x0 length 0x400 00:29:35.326 Nvme8n1 : 0.98 261.20 16.33 0.00 0.00 208213.33 24139.09 246415.36 00:29:35.326 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.326 Verification LBA range: start 0x0 length 0x400 00:29:35.326 Nvme9n1 : 0.96 199.29 12.46 0.00 0.00 264631.47 21189.97 272629.76 00:29:35.326 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.326 Verification LBA range: start 0x0 length 0x400 00:29:35.326 Nvme10n1 : 0.96 199.74 12.48 0.00 0.00 258656.71 39321.60 249910.61 00:29:35.326 [2024-10-12T20:17:53.815Z] =================================================================================================================== 00:29:35.326 
[2024-10-12T20:17:53.815Z] Total : 2384.21 149.01 0.00 0.00 240537.26 3331.41 272629.76 00:29:35.326 22:17:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3643771 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.268 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.268 rmmod nvme_tcp 00:29:36.529 rmmod nvme_fabrics 00:29:36.529 rmmod nvme_keyring 00:29:36.529 22:17:54 
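The killprocess guard traced above (autotest_common.sh@950-974) is used twice in this run, once for the bdevperf pid 3644142 and once for the nvmf target pid 3643771. A sketch follows; the `sleep 60` victim is illustrative, and the portable `ps -o comm= -p` is substituted for the trace's GNU-specific `ps --no-headers -o comm=`:

```shell
# Sketch of the killprocess guard traced above (autotest_common.sh@950-974):
# verify the pid argument is set and the process is alive, refuse to signal
# a sudo wrapper directly, then kill it and reap it.
killprocess() {
  pid=$1
  [ -n "$pid" ] || return 1                     # '[' -z ... ']' guard
  kill -0 "$pid" 2>/dev/null || return 1        # is it still alive?
  process_name=$(ps -o comm= -p "$pid" 2>/dev/null)
  [ "$process_name" != "sudo" ] || return 1     # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                       # reap, like the 'wait' step
  return 0
}

sleep 60 &        # illustrative victim process
victim=$!
killprocess "$victim"
```

The `ps -o comm=` check is what produces the `process_name=reactor_0` / `reactor_1` lines in the trace: SPDK reactors rename their threads, so the comparison is against the reactor name, not the binary path.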
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 3643771 ']' 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 3643771 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3643771 ']' 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3643771 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3643771 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3643771' 00:29:36.529 killing process with pid 3643771 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3643771 00:29:36.529 22:17:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@974 -- # wait 3643771 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.789 22:17:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.703 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.703 00:29:38.703 real 0m7.904s 00:29:38.703 user 0m23.875s 00:29:38.703 sys 0m1.318s 00:29:38.703 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:29:38.703 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.703 ************************************ 00:29:38.703 END TEST nvmf_shutdown_tc2 00:29:38.703 ************************************ 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@171 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:38.965 ************************************ 00:29:38.965 START TEST nvmf_shutdown_tc3 00:29:38.965 ************************************ 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.965 22:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:38.965 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:38.965 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.966 22:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:38.966 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:38.966 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- 
# echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:38.966 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:38.966 22:17:57 
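The device-discovery trace above maps each supported NIC's PCI address to its kernel interface by globbing `/sys/bus/pci/devices/$pci/net/` and then stripping the path prefix with `${pci_net_devs[@]##*/}`. That expansion is the whole trick; a tiny sketch with a sample path mirroring the log:

```shell
# The core of the "Found net devices under ..." step above: glob the sysfs
# net/ directory of a PCI device, then keep only the interface basename via
# the ##*/ longest-prefix strip. The path here is a sample from the log.
pci_net_devs=("/sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip dirs, keep iface names
echo "Found net devices under 0000:4b:00.0: ${pci_net_devs[0]}"
```

`##*/` removes the longest match of `*/` from the front of each element, which is why the trace's arrays end up holding bare names like `cvl_0_0` and `cvl_0_1` ready for `ip link set ... up`.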
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.966 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:39.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:39.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:29:39.228 00:29:39.228 --- 10.0.0.2 ping statistics --- 00:29:39.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.228 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:39.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:39.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:29:39.228 00:29:39.228 --- 10.0.0.1 ping statistics --- 00:29:39.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.228 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=3645347 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 3645347 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3645347 ']' 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:29:39.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:39.228 22:17:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:39.228 [2024-10-12 22:17:57.692461] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:39.228 [2024-10-12 22:17:57.692522] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.489 [2024-10-12 22:17:57.778540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:39.489 [2024-10-12 22:17:57.812742] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.489 [2024-10-12 22:17:57.812777] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.489 [2024-10-12 22:17:57.812783] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.489 [2024-10-12 22:17:57.812788] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.489 [2024-10-12 22:17:57.812792] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
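The `waitforlisten` step above blocks until the freshly started `nvmf_tgt` process accepts RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A minimal re-sketch of that retry loop, with a stubbed readiness check (`sock_ready` and its "ready on the 3rd poll" behavior are invented here for illustration; the real helper probes the UNIX domain socket with an RPC):

```shell
# Sketch of the waitforlisten retry loop from autotest_common.sh.
# sock_ready is a stub standing in for the real RPC probe against
# /var/tmp/spdk.sock; here it simply reports ready on the 3rd poll.
attempts=0
sock_ready() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

waitforlisten() {
  local max_retries=100
  while ((max_retries-- > 0)); do
    if sock_ready; then
      return 0
    fi
    sleep 0.05
  done
  return 1
}

if waitforlisten; then
  echo "listener ready after $attempts polls"
fi
```

The same shape (bounded retries, short sleep, early return on success) recurs throughout these scripts, e.g. in `waitforio` further down.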
00:29:39.489 [2024-10-12 22:17:57.812937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.489 [2024-10-12 22:17:57.813095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:39.489 [2024-10-12 22:17:57.813262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.489 [2024-10-12 22:17:57.813263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.060 [2024-10-12 22:17:58.541546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.060 22:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.060 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.320 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:40.320 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.320 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.320 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.320 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.320 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.321 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.321 Malloc1 00:29:40.321 [2024-10-12 22:17:58.640106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.321 Malloc2 00:29:40.321 Malloc3 00:29:40.321 Malloc4 00:29:40.321 Malloc5 00:29:40.321 Malloc6 00:29:40.582 Malloc7 00:29:40.582 Malloc8 00:29:40.582 Malloc9 
00:29:40.582 Malloc10 00:29:40.582 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.582 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:40.582 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.582 22:17:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3645677 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3645677 /var/tmp/bdevperf.sock 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3645677 ']' 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:40.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:40.582 { 00:29:40.582 "params": { 00:29:40.582 "name": "Nvme$subsystem", 00:29:40.582 "trtype": "$TEST_TRANSPORT", 00:29:40.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.582 "adrfam": "ipv4", 00:29:40.582 "trsvcid": "$NVMF_PORT", 00:29:40.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.582 "hdgst": ${hdgst:-false}, 00:29:40.582 "ddgst": ${ddgst:-false} 00:29:40.582 }, 00:29:40.582 "method": "bdev_nvme_attach_controller" 00:29:40.582 } 00:29:40.582 EOF 00:29:40.582 )") 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 
00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:40.582 { 00:29:40.582 "params": { 00:29:40.582 "name": "Nvme$subsystem", 00:29:40.582 "trtype": "$TEST_TRANSPORT", 00:29:40.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.582 "adrfam": "ipv4", 00:29:40.582 "trsvcid": "$NVMF_PORT", 00:29:40.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.582 "hdgst": ${hdgst:-false}, 00:29:40.582 "ddgst": ${ddgst:-false} 00:29:40.582 }, 00:29:40.582 "method": "bdev_nvme_attach_controller" 00:29:40.582 } 00:29:40.582 EOF 00:29:40.582 )") 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:40.582 { 00:29:40.582 "params": { 00:29:40.582 "name": "Nvme$subsystem", 00:29:40.582 "trtype": "$TEST_TRANSPORT", 00:29:40.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.582 "adrfam": "ipv4", 00:29:40.582 "trsvcid": "$NVMF_PORT", 00:29:40.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.582 "hdgst": ${hdgst:-false}, 00:29:40.582 "ddgst": ${ddgst:-false} 00:29:40.582 }, 00:29:40.582 "method": "bdev_nvme_attach_controller" 00:29:40.582 } 00:29:40.582 EOF 00:29:40.582 )") 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat 
<<-EOF 00:29:40.582 { 00:29:40.582 "params": { 00:29:40.582 "name": "Nvme$subsystem", 00:29:40.582 "trtype": "$TEST_TRANSPORT", 00:29:40.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.582 "adrfam": "ipv4", 00:29:40.582 "trsvcid": "$NVMF_PORT", 00:29:40.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.582 "hdgst": ${hdgst:-false}, 00:29:40.582 "ddgst": ${ddgst:-false} 00:29:40.582 }, 00:29:40.582 "method": "bdev_nvme_attach_controller" 00:29:40.582 } 00:29:40.582 EOF 00:29:40.582 )") 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:40.582 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:40.582 { 00:29:40.582 "params": { 00:29:40.582 "name": "Nvme$subsystem", 00:29:40.582 "trtype": "$TEST_TRANSPORT", 00:29:40.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.582 "adrfam": "ipv4", 00:29:40.582 "trsvcid": "$NVMF_PORT", 00:29:40.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.582 "hdgst": ${hdgst:-false}, 00:29:40.582 "ddgst": ${ddgst:-false} 00:29:40.582 }, 00:29:40.582 "method": "bdev_nvme_attach_controller" 00:29:40.582 } 00:29:40.582 EOF 00:29:40.582 )") 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:40.844 { 00:29:40.844 "params": { 00:29:40.844 "name": "Nvme$subsystem", 00:29:40.844 "trtype": "$TEST_TRANSPORT", 
00:29:40.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.844 "adrfam": "ipv4", 00:29:40.844 "trsvcid": "$NVMF_PORT", 00:29:40.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.844 "hdgst": ${hdgst:-false}, 00:29:40.844 "ddgst": ${ddgst:-false} 00:29:40.844 }, 00:29:40.844 "method": "bdev_nvme_attach_controller" 00:29:40.844 } 00:29:40.844 EOF 00:29:40.844 )") 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:40.844 [2024-10-12 22:17:59.079422] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:40.844 [2024-10-12 22:17:59.079475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645677 ] 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:40.844 { 00:29:40.844 "params": { 00:29:40.844 "name": "Nvme$subsystem", 00:29:40.844 "trtype": "$TEST_TRANSPORT", 00:29:40.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.844 "adrfam": "ipv4", 00:29:40.844 "trsvcid": "$NVMF_PORT", 00:29:40.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.844 "hdgst": ${hdgst:-false}, 00:29:40.844 "ddgst": ${ddgst:-false} 00:29:40.844 }, 00:29:40.844 "method": "bdev_nvme_attach_controller" 00:29:40.844 } 00:29:40.844 EOF 00:29:40.844 )") 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:40.844 { 00:29:40.844 "params": { 00:29:40.844 "name": "Nvme$subsystem", 00:29:40.844 "trtype": "$TEST_TRANSPORT", 00:29:40.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.844 "adrfam": "ipv4", 00:29:40.844 "trsvcid": "$NVMF_PORT", 00:29:40.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.844 "hdgst": ${hdgst:-false}, 00:29:40.844 "ddgst": ${ddgst:-false} 00:29:40.844 }, 00:29:40.844 "method": "bdev_nvme_attach_controller" 00:29:40.844 } 00:29:40.844 EOF 00:29:40.844 )") 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:40.844 { 00:29:40.844 "params": { 00:29:40.844 "name": "Nvme$subsystem", 00:29:40.844 "trtype": "$TEST_TRANSPORT", 00:29:40.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.844 "adrfam": "ipv4", 00:29:40.844 "trsvcid": "$NVMF_PORT", 00:29:40.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.844 "hdgst": ${hdgst:-false}, 00:29:40.844 "ddgst": ${ddgst:-false} 00:29:40.844 }, 00:29:40.844 "method": "bdev_nvme_attach_controller" 00:29:40.844 } 00:29:40.844 EOF 00:29:40.844 )") 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:29:40.844 22:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:29:40.844 { 00:29:40.844 "params": { 00:29:40.844 "name": "Nvme$subsystem", 00:29:40.844 "trtype": "$TEST_TRANSPORT", 00:29:40.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:40.844 "adrfam": "ipv4", 00:29:40.844 "trsvcid": "$NVMF_PORT", 00:29:40.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:40.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:40.844 "hdgst": ${hdgst:-false}, 00:29:40.844 "ddgst": ${ddgst:-false} 00:29:40.844 }, 00:29:40.844 "method": "bdev_nvme_attach_controller" 00:29:40.844 } 00:29:40.844 EOF 00:29:40.844 )") 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:29:40.844 22:17:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:29:40.844 "params": { 00:29:40.844 "name": "Nvme1", 00:29:40.844 "trtype": "tcp", 00:29:40.844 "traddr": "10.0.0.2", 00:29:40.844 "adrfam": "ipv4", 00:29:40.844 "trsvcid": "4420", 00:29:40.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:40.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:40.844 "hdgst": false, 00:29:40.844 "ddgst": false 00:29:40.844 }, 00:29:40.844 "method": "bdev_nvme_attach_controller" 00:29:40.844 },{ 00:29:40.844 "params": { 00:29:40.844 "name": "Nvme2", 00:29:40.844 "trtype": "tcp", 00:29:40.844 "traddr": "10.0.0.2", 00:29:40.844 "adrfam": "ipv4", 00:29:40.844 "trsvcid": "4420", 00:29:40.844 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:40.844 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:40.844 "hdgst": false, 00:29:40.844 "ddgst": false 00:29:40.844 }, 00:29:40.844 "method": "bdev_nvme_attach_controller" 00:29:40.844 },{ 
00:29:40.844 "params": { 00:29:40.844 "name": "Nvme3", 00:29:40.844 "trtype": "tcp", 00:29:40.844 "traddr": "10.0.0.2", 00:29:40.844 "adrfam": "ipv4", 00:29:40.844 "trsvcid": "4420", 00:29:40.844 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:40.844 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:40.844 "hdgst": false, 00:29:40.844 "ddgst": false 00:29:40.844 }, 00:29:40.844 "method": "bdev_nvme_attach_controller" 00:29:40.844 },{ 00:29:40.844 "params": { 00:29:40.844 "name": "Nvme4", 00:29:40.844 "trtype": "tcp", 00:29:40.844 "traddr": "10.0.0.2", 00:29:40.844 "adrfam": "ipv4", 00:29:40.844 "trsvcid": "4420", 00:29:40.844 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:40.844 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:40.844 "hdgst": false, 00:29:40.844 "ddgst": false 00:29:40.844 }, 00:29:40.844 "method": "bdev_nvme_attach_controller" 00:29:40.844 },{ 00:29:40.844 "params": { 00:29:40.844 "name": "Nvme5", 00:29:40.844 "trtype": "tcp", 00:29:40.844 "traddr": "10.0.0.2", 00:29:40.844 "adrfam": "ipv4", 00:29:40.844 "trsvcid": "4420", 00:29:40.844 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:40.845 "hdgst": false, 00:29:40.845 "ddgst": false 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 },{ 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme6", 00:29:40.845 "trtype": "tcp", 00:29:40.845 "traddr": "10.0.0.2", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "4420", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:40.845 "hdgst": false, 00:29:40.845 "ddgst": false 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 },{ 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme7", 00:29:40.845 "trtype": "tcp", 00:29:40.845 "traddr": "10.0.0.2", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "4420", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:40.845 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:29:40.845 "hdgst": false, 00:29:40.845 "ddgst": false 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 },{ 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme8", 00:29:40.845 "trtype": "tcp", 00:29:40.845 "traddr": "10.0.0.2", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "4420", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:40.845 "hdgst": false, 00:29:40.845 "ddgst": false 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 },{ 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme9", 00:29:40.845 "trtype": "tcp", 00:29:40.845 "traddr": "10.0.0.2", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "4420", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:40.845 "hdgst": false, 00:29:40.845 "ddgst": false 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 },{ 00:29:40.845 "params": { 00:29:40.845 "name": "Nvme10", 00:29:40.845 "trtype": "tcp", 00:29:40.845 "traddr": "10.0.0.2", 00:29:40.845 "adrfam": "ipv4", 00:29:40.845 "trsvcid": "4420", 00:29:40.845 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:40.845 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:40.845 "hdgst": false, 00:29:40.845 "ddgst": false 00:29:40.845 }, 00:29:40.845 "method": "bdev_nvme_attach_controller" 00:29:40.845 }' 00:29:40.845 [2024-10-12 22:17:59.156916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.845 [2024-10-12 22:17:59.188449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.230 Running I/O for 10 seconds... 
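The `gen_nvmf_target_json` expansion above assembles one `bdev_nvme_attach_controller` entry per subsystem from a heredoc template, then joins them into the JSON document fed to bdevperf via `--json /dev/fd/63`. A condensed, runnable sketch of that pattern (three subsystems instead of ten; the address and NQN values are copied from the log, and the `python3` length check at the end is only for demonstration):

```shell
# Condensed sketch of the gen_nvmf_target_json heredoc pattern: build one
# attach-controller fragment per subsystem id, then comma-join them into a
# JSON array like the one printf'd in the log above.
config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas, as the IFS=, printf '%s\n' step does
# in nvmf/common.sh, and confirm the result parses as a 3-element array.
printf -v joined '%s,' "${config[@]}"
json="[${joined%,}]"
echo "$json" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)))'
```

Building each fragment with `cat <<EOF` inside `$(...)` lets the template interpolate `$subsystem` while keeping the JSON readable in the script source.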
00:29:42.230 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.230 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:42.230 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:42.230 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.230 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:42.491 22:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:42.491 22:18:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:42.752 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:42.752 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:42.752 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:42.752 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:42.752 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.752 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:42.752 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:29:42.752 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:42.752 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:42.752 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:43.012 22:18:01 
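The xtrace lines above trace the `waitforio` helper from `target/shutdown.sh`: it polls `bdev_get_iostat` over the bdevperf RPC socket, extracts `num_read_ops` with a `jq` filter, and returns 0 once the count reaches 100, retrying up to 10 times with a 0.25 s sleep (here the count went 3, 67, 195 before the threshold was met). A runnable sketch of that loop, with `rpc_cmd` stubbed so it runs standalone (the real helper invokes SPDK's `rpc.py -s /var/tmp/bdevperf.sock`; `sed` stands in for the `jq` filter):

```shell
# Stub of rpc_cmd: pretend read ops grow by ~64 per poll, the way the
# real bdev's counter climbs while bdevperf runs I/O.
rpc_cmd() {
  echo "{\"bdevs\":[{\"name\":\"Nvme1n1\",\"num_read_ops\":$(( poll * 64 ))}]}"
}

# Poll the iostat counter until it crosses 100 read ops or 10 tries elapse;
# mirrors the loop structure (i = 10; i != 0; i--) seen in the trace.
waitforio() {
  local ret=1 i
  for (( i = 10; i != 0; i-- )); do
    poll=$(( 10 - i + 1 ))
    read_io_count=$(rpc_cmd bdev_get_iostat -b Nvme1n1 |
      sed -n 's/.*"num_read_ops":\([0-9]*\).*/\1/p')
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "I/O observed"   # prints "I/O observed"
```

Returning success only after a minimum read count guarantees the target is actually serving I/O before the test proceeds to kill the nvmf application mid-traffic, which is the point of the shutdown_tc3 case.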
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3645347 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3645347 ']' 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3645347 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:43.012 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3645347 00:29:43.288 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:43.288 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:43.288 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3645347' 00:29:43.288 killing process with pid 3645347 00:29:43.288 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3645347 00:29:43.288 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3645347 00:29:43.288 [2024-10-12 22:18:01.547806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176cfa0 is same with the state(6) to be set 00:29:43.288 [2024-10-12 22:18:01.547853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176cfa0 is same with the state(6) to be set 00:29:43.288 [2024-10-12 22:18:01.547859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x176cfa0 is same with the state(6) to be set 00:29:43.288 [2024-10-12 22:18:01.549215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0d90 is same with the state(6) to be set 00:29:43.289 [2024-10-12 22:18:01.550553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176d470 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552470] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.290 [2024-10-12 22:18:01.552528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552533] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552594] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552655] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552712] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.552751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176de30 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553357] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553419] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553483] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.291 [2024-10-12 22:18:01.553523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553542] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553599] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.553653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176e300 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554687] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554767] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554836] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554896] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.292 [2024-10-12 22:18:01.554954] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.554959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.554963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.554968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.554974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.554978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.554983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.554988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.554992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.554997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555011] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176eb50 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555676] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555737] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555795] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555853] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.555878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.566019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.566041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.566048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.566054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.566060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.566065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.566071] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.293 [2024-10-12 22:18:01.566076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.294 [2024-10-12 22:18:01.566080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.294 [2024-10-12 22:18:01.566085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.294 [2024-10-12 22:18:01.566090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.294 [2024-10-12 22:18:01.566094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176f040 is same with the state(6) to be set 00:29:43.294 [2024-10-12 22:18:01.567512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:43.294 [2024-10-12 22:18:01.567613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 
22:18:01.567708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.567987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.567995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 
[2024-10-12 22:18:01.568117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.294 [2024-10-12 22:18:01.568249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.294 [2024-10-12 22:18:01.568259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 
22:18:01.568515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568616] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.295 [2024-10-12 22:18:01.568699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:43.295 [2024-10-12 22:18:01.568778] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26dccc0 was disconnected and freed. 
reset controller. 00:29:43.295 [2024-10-12 22:18:01.568880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.295 [2024-10-12 22:18:01.568894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.295 [2024-10-12 22:18:01.568912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.295 [2024-10-12 22:18:01.568929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.295 [2024-10-12 22:18:01.568951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.295 [2024-10-12 22:18:01.568959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8720 is same with the state(6) to be set 00:29:43.295 [2024-10-12 22:18:01.568985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.295 [2024-10-12 22:18:01.568995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569004] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7610 is same with the state(6) to be set 00:29:43.296 [2024-10-12 22:18:01.569078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2738a00 is same with the state(6) to be set 00:29:43.296 [2024-10-12 22:18:01.569186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2739640 is same with the state(6) to be set 00:29:43.296 [2024-10-12 22:18:01.569271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27041f0 is same with the state(6) to be set 00:29:43.296 [2024-10-12 22:18:01.569367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2704580 is same with the state(6) to be set 00:29:43.296 [2024-10-12 22:18:01.569457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26fd980 is same with the state(6) to be set 00:29:43.296 [2024-10-12 22:18:01.569554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d6e90 is same with the state(6) to be set 00:29:43.296 [2024-10-12 22:18:01.569644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.296 [2024-10-12 22:18:01.569672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.296 [2024-10-12 22:18:01.569680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.297 [2024-10-12 22:18:01.569688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.569697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.297 [2024-10-12 22:18:01.569704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.569711] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8b80 is same with the state(6) to be set 00:29:43.297 [2024-10-12 22:18:01.569737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.297 [2024-10-12 22:18:01.569748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.569757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.297 [2024-10-12 22:18:01.569765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.569774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.297 [2024-10-12 22:18:01.569781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.569789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.297 [2024-10-12 22:18:01.569797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.569804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27415f0 is same with the state(6) to be set 00:29:43.297 [2024-10-12 22:18:01.569933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.569952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.569965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.569973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.569984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.569992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 
[2024-10-12 22:18:01.570067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.297 [2024-10-12 22:18:01.570342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.297 [2024-10-12 22:18:01.570350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 
[2024-10-12 22:18:01.570484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.298 [2024-10-12 22:18:01.570782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.298 [2024-10-12 22:18:01.570790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.570807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.570823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.570842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.570858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.570876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 
22:18:01.570894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.570913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.570932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.570957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.570974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.570984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.570992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.571001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.571009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.571019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.571026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.571036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.571044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.571054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.579620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.579674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.579690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.579707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.579721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.579737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.579751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.579769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26c8e50 is same with the state(6) to be set 00:29:43.299 [2024-10-12 22:18:01.579833] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26c8e50 was disconnected and freed. reset controller. 00:29:43.299 [2024-10-12 22:18:01.580029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.580050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.580071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.580091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.580126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.580142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.580160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.580173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.580190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.580202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.580218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.580231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.580248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.580261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.580277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.580289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.580306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.299 [2024-10-12 22:18:01.580319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.299 [2024-10-12 22:18:01.580335] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580827] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.580971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.580984] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.581000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.581014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.581030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.581043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.581058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.581072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.581087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.581101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.581127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.581141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.581157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.300 [2024-10-12 22:18:01.581170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.300 [2024-10-12 22:18:01.581187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 
22:18:01.581338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:43.301 [2024-10-12 22:18:01.581844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.581933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.301 [2024-10-12 22:18:01.581946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.301 [2024-10-12 22:18:01.582036] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26d8d80 was disconnected and freed. reset controller. 
00:29:43.301 [2024-10-12 22:18:01.583982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d8720 (9): Bad file descriptor 00:29:43.301 [2024-10-12 22:18:01.584022] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e7610 (9): Bad file descriptor 00:29:43.301 [2024-10-12 22:18:01.584045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2738a00 (9): Bad file descriptor 00:29:43.301 [2024-10-12 22:18:01.584069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2739640 (9): Bad file descriptor 00:29:43.301 [2024-10-12 22:18:01.584090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27041f0 (9): Bad file descriptor 00:29:43.301 [2024-10-12 22:18:01.584147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2704580 (9): Bad file descriptor 00:29:43.301 [2024-10-12 22:18:01.584172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26fd980 (9): Bad file descriptor 00:29:43.301 [2024-10-12 22:18:01.584199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d6e90 (9): Bad file descriptor 00:29:43.301 [2024-10-12 22:18:01.584221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d8b80 (9): Bad file descriptor 00:29:43.301 [2024-10-12 22:18:01.584248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27415f0 (9): Bad file descriptor 00:29:43.301 [2024-10-12 22:18:01.588048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:43.301 [2024-10-12 22:18:01.588088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:43.302 [2024-10-12 22:18:01.588882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:43.302 [2024-10-12 22:18:01.589080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.302 [2024-10-12 22:18:01.589114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2739640 with addr=10.0.0.2, port=4420 00:29:43.302 [2024-10-12 22:18:01.589131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2739640 is same with the state(6) to be set 00:29:43.302 [2024-10-12 22:18:01.589513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.302 [2024-10-12 22:18:01.589538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d6e90 with addr=10.0.0.2, port=4420 00:29:43.302 [2024-10-12 22:18:01.589551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d6e90 is same with the state(6) to be set 00:29:43.302 [2024-10-12 22:18:01.590058] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:43.302 [2024-10-12 22:18:01.590136] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:43.302 [2024-10-12 22:18:01.590536] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:43.302 [2024-10-12 22:18:01.590594] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:43.302 [2024-10-12 22:18:01.590661] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:43.302 [2024-10-12 22:18:01.590879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.302 [2024-10-12 22:18:01.590902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2704580 with addr=10.0.0.2, port=4420 00:29:43.302 [2024-10-12 22:18:01.590915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2704580 is same with the state(6) to be set 00:29:43.302 
[2024-10-12 22:18:01.590932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2739640 (9): Bad file descriptor 00:29:43.302 [2024-10-12 22:18:01.590950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d6e90 (9): Bad file descriptor 00:29:43.302 [2024-10-12 22:18:01.591001] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:43.302 [2024-10-12 22:18:01.591063] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:43.302 [2024-10-12 22:18:01.591188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2704580 (9): Bad file descriptor 00:29:43.302 [2024-10-12 22:18:01.591208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:43.302 [2024-10-12 22:18:01.591221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:43.302 [2024-10-12 22:18:01.591236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:43.302 [2024-10-12 22:18:01.591258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:43.302 [2024-10-12 22:18:01.591270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:43.302 [2024-10-12 22:18:01.591282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:43.302 [2024-10-12 22:18:01.591367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.302 [2024-10-12 22:18:01.591383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.302 [2024-10-12 22:18:01.591395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:43.302 [2024-10-12 22:18:01.591407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:43.302 [2024-10-12 22:18:01.591420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:43.302 [2024-10-12 22:18:01.591481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.302 [2024-10-12 22:18:01.594141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:43.302 [2024-10-12 22:18:01.594281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.302 [2024-10-12 22:18:01.594771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.302 [2024-10-12 22:18:01.594784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:43.303 [2024-10-12 22:18:01.594801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.594814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.594831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.594845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.594861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.594875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.594892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.594905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.594921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.594935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.594950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.594964] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.594980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.594994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 
22:18:01.595488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595659] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.303 [2024-10-12 22:18:01.595748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.303 [2024-10-12 22:18:01.595761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.595777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.595790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.595804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.595817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.595833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.595847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.595862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.595876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.595896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.595909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.595925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.595938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.595955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.595969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.595985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 
[2024-10-12 22:18:01.595999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.596015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.596028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.596045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.596058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.596073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.596087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.596106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27fd970 is same with the state(6) to be set 00:29:43.304 [2024-10-12 22:18:01.597928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.597953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.597974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.597988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:43.304 [2024-10-12 22:18:01.598341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598508] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-10-12 22:18:01.598656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.304 [2024-10-12 22:18:01.598671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
00:29:43.304-00:29:43.305 [2024-10-12 22:18:01.598685 - 22:18:01.599858] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 40 repeated pairs — READ sqid:1 cid:24-63 nsid:1 lba:19456-24448 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.305 [2024-10-12 22:18:01.599872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27fec00 is same with the state(6) to be set
00:29:43.305-00:29:43.307 [2024-10-12 22:18:01.601689 - 22:18:01.603558] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 repeated pairs — READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.307 [2024-10-12 22:18:01.603567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d62f0 is same with the state(6) to be set
00:29:43.307 [2024-10-12 22:18:01.604849 - 22:18:01.605027] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 8 repeated pairs — READ sqid:1 cid:0-7 nsid:1 lba:24576-25472 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:43.307 [2024-10-12 22:18:01.605043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1
lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:43.307 [2024-10-12 22:18:01.605230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605392] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.307 [2024-10-12 22:18:01.605604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.307 [2024-10-12 22:18:01.605622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 
22:18:01.605901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.605982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.605996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606073] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 
[2024-10-12 22:18:01.606365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.308 [2024-10-12 22:18:01.606631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.308 [2024-10-12 22:18:01.606648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d7800 is same with the state(6) to be set 00:29:43.309 [2024-10-12 22:18:01.608463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:43.309 [2024-10-12 22:18:01.608659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.608970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.608987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:43.309 [2024-10-12 22:18:01.609187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.309 [2024-10-12 22:18:01.609730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.309 [2024-10-12 22:18:01.609747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.609761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.609779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.609792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.609809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.609821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.609837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 
22:18:01.609852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.609867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.609880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.609898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.609910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.609926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.609941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.609957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.609971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.609989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610021] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 
[2024-10-12 22:18:01.610375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.610405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.610419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26da300 is same with the state(6) to be set 00:29:43.310 [2024-10-12 22:18:01.612188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:43.310 [2024-10-12 22:18:01.612476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.310 [2024-10-12 22:18:01.612548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.310 [2024-10-12 22:18:01.612556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612574] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 
22:18:01.612883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.612983] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.612992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 
[2024-10-12 22:18:01.613199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.311 [2024-10-12 22:18:01.613326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.311 [2024-10-12 22:18:01.613336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.613344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.613354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.613362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.613371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26db880 is same with the state(6) to be set 00:29:43.312 [2024-10-12 22:18:01.615466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:43.312 [2024-10-12 22:18:01.615697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.615982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.615999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:43.312 [2024-10-12 22:18:01.616218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616376] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.312 [2024-10-12 22:18:01.616541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.312 [2024-10-12 22:18:01.616554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 
22:18:01.616878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.616981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.616993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617039] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 [2024-10-12 22:18:01.617353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.313 
[2024-10-12 22:18:01.617382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.313 [2024-10-12 22:18:01.617396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26de240 is same with the state(6) to be set 00:29:43.313 [2024-10-12 22:18:01.619465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.313 [2024-10-12 22:18:01.619496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:43.313 [2024-10-12 22:18:01.619513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:43.313 [2024-10-12 22:18:01.619529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:43.313 [2024-10-12 22:18:01.619640] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:43.313 [2024-10-12 22:18:01.619663] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:43.313 [2024-10-12 22:18:01.619689] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:43.313 [2024-10-12 22:18:01.643983] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:43.313 [2024-10-12 22:18:01.644024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:43.313 task offset: 26112 on job bdev=Nvme9n1 fails 00:29:43.313 00:29:43.313 Latency(us) 00:29:43.313 [2024-10-12T20:18:01.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.313 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.313 Job: Nvme1n1 ended in about 0.99 seconds with error 00:29:43.313 Verification LBA range: start 0x0 length 0x400 00:29:43.313 Nvme1n1 : 0.99 193.50 12.09 64.50 0.00 245318.40 22937.60 234181.97 00:29:43.313 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.313 Job: Nvme2n1 ended in about 1.00 seconds with error 00:29:43.313 Verification LBA range: start 0x0 length 0x400 00:29:43.313 Nvme2n1 : 1.00 128.52 8.03 64.26 0.00 322001.64 14745.60 258648.75 00:29:43.313 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.313 Job: Nvme3n1 ended in about 0.98 seconds with error 00:29:43.313 Verification LBA range: start 0x0 length 0x400 00:29:43.313 Nvme3n1 : 0.98 195.83 12.24 65.28 0.00 232676.69 18131.63 262144.00 00:29:43.313 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.313 Job: Nvme4n1 ended in about 1.00 seconds with error 00:29:43.313 Verification LBA range: start 0x0 length 0x400 00:29:43.313 Nvme4n1 : 1.00 192.09 12.01 64.03 0.00 232715.09 18896.21 248162.99 00:29:43.313 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.313 Job: Nvme5n1 ended in about 1.00 seconds with error 00:29:43.313 Verification LBA range: start 0x0 length 0x400 00:29:43.313 Nvme5n1 : 1.00 191.48 11.97 63.83 0.00 228531.41 17148.59 248162.99 00:29:43.313 Job: Nvme6n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:29:43.313 Job: Nvme6n1 ended in about 0.98 seconds with error 00:29:43.313 Verification LBA range: start 0x0 length 0x400 00:29:43.313 Nvme6n1 : 0.98 199.54 12.47 65.15 0.00 215253.62 15400.96 274377.39 00:29:43.313 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.313 Job: Nvme7n1 ended in about 1.01 seconds with error 00:29:43.313 Verification LBA range: start 0x0 length 0x400 00:29:43.313 Nvme7n1 : 1.01 127.17 7.95 63.59 0.00 293226.10 21736.11 274377.39 00:29:43.313 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.313 Job: Nvme8n1 ended in about 1.01 seconds with error 00:29:43.313 Verification LBA range: start 0x0 length 0x400 00:29:43.313 Nvme8n1 : 1.01 190.21 11.89 63.40 0.00 215685.97 20643.84 251658.24 00:29:43.314 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.314 Job: Nvme9n1 ended in about 0.98 seconds with error 00:29:43.314 Verification LBA range: start 0x0 length 0x400 00:29:43.314 Nvme9n1 : 0.98 196.26 12.27 65.42 0.00 203057.49 15073.28 272629.76 00:29:43.314 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:43.314 Job: Nvme10n1 ended in about 1.01 seconds with error 00:29:43.314 Verification LBA range: start 0x0 length 0x400 00:29:43.314 Nvme10n1 : 1.01 126.30 7.89 63.15 0.00 276092.02 39321.60 274377.39 00:29:43.314 [2024-10-12T20:18:01.803Z] =================================================================================================================== 00:29:43.314 [2024-10-12T20:18:01.803Z] Total : 1740.89 108.81 642.60 0.00 242303.34 14745.60 274377.39 00:29:43.314 [2024-10-12 22:18:01.672199] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:43.314 [2024-10-12 22:18:01.672239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:43.314 [2024-10-12 22:18:01.672677] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.314 [2024-10-12 22:18:01.672697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d8b80 with addr=10.0.0.2, port=4420 00:29:43.314 [2024-10-12 22:18:01.672708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8b80 is same with the state(6) to be set 00:29:43.314 [2024-10-12 22:18:01.672976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.314 [2024-10-12 22:18:01.672988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d8720 with addr=10.0.0.2, port=4420 00:29:43.314 [2024-10-12 22:18:01.672995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d8720 is same with the state(6) to be set 00:29:43.314 [2024-10-12 22:18:01.673356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.314 [2024-10-12 22:18:01.673368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27041f0 with addr=10.0.0.2, port=4420 00:29:43.314 [2024-10-12 22:18:01.673377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27041f0 is same with the state(6) to be set 00:29:43.314 [2024-10-12 22:18:01.673709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.314 [2024-10-12 22:18:01.673720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26fd980 with addr=10.0.0.2, port=4420 00:29:43.314 [2024-10-12 22:18:01.673734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26fd980 is same with the state(6) to be set 00:29:43.314 [2024-10-12 22:18:01.673765] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:43.314 [2024-10-12 22:18:01.673778] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:43.314 [2024-10-12 22:18:01.673790] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:43.314 [2024-10-12 22:18:01.673810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26fd980 (9): Bad file descriptor 00:29:43.314 [2024-10-12 22:18:01.673827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27041f0 (9): Bad file descriptor 00:29:43.314 [2024-10-12 22:18:01.673841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d8720 (9): Bad file descriptor 00:29:43.314 [2024-10-12 22:18:01.673853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d8b80 (9): Bad file descriptor 00:29:43.314 1740.89 IOPS, 108.81 MiB/s [2024-10-12T20:18:01.803Z] [2024-10-12 22:18:01.675738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:43.314 [2024-10-12 22:18:01.675761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:43.314 [2024-10-12 22:18:01.676061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.314 [2024-10-12 22:18:01.676079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21e7610 with addr=10.0.0.2, port=4420 00:29:43.314 [2024-10-12 22:18:01.676092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e7610 is same with the state(6) to be set 00:29:43.314 [2024-10-12 22:18:01.676411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.314 [2024-10-12 22:18:01.676423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2738a00 with 
addr=10.0.0.2, port=4420 00:29:43.314 [2024-10-12 22:18:01.676431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2738a00 is same with the state(6) to be set 00:29:43.314 [2024-10-12 22:18:01.676763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.314 [2024-10-12 22:18:01.676776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27415f0 with addr=10.0.0.2, port=4420 00:29:43.314 [2024-10-12 22:18:01.676786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27415f0 is same with the state(6) to be set 00:29:43.314 [2024-10-12 22:18:01.676814] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:43.314 [2024-10-12 22:18:01.676825] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:43.314 [2024-10-12 22:18:01.676836] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:43.314 [2024-10-12 22:18:01.676849] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:43.314 [2024-10-12 22:18:01.676859] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:43.314 [2024-10-12 22:18:01.676931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:43.314 [2024-10-12 22:18:01.677282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.314 [2024-10-12 22:18:01.677298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d6e90 with addr=10.0.0.2, port=4420 00:29:43.314 [2024-10-12 22:18:01.677306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d6e90 is same with the state(6) to be set 00:29:43.314 [2024-10-12 22:18:01.677635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.314 [2024-10-12 22:18:01.677646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2739640 with addr=10.0.0.2, port=4420 00:29:43.314 [2024-10-12 22:18:01.677658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2739640 is same with the state(6) to be set 00:29:43.314 [2024-10-12 22:18:01.677668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e7610 (9): Bad file descriptor 00:29:43.314 [2024-10-12 22:18:01.677678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2738a00 (9): Bad file descriptor 00:29:43.314 [2024-10-12 22:18:01.677688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27415f0 (9): Bad file descriptor 00:29:43.314 [2024-10-12 22:18:01.677697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.314 [2024-10-12 22:18:01.677704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.314 [2024-10-12 22:18:01.677712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:43.314 [2024-10-12 22:18:01.677724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:43.314 [2024-10-12 22:18:01.677733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:43.314 [2024-10-12 22:18:01.677748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:43.314 [2024-10-12 22:18:01.677759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:43.314 [2024-10-12 22:18:01.677766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:43.314 [2024-10-12 22:18:01.677773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:43.314 [2024-10-12 22:18:01.677784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:43.314 [2024-10-12 22:18:01.677791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:43.314 [2024-10-12 22:18:01.677798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:43.314 [2024-10-12 22:18:01.677878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.314 [2024-10-12 22:18:01.677887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.314 [2024-10-12 22:18:01.677894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.314 [2024-10-12 22:18:01.677901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.314 [2024-10-12 22:18:01.678209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.314 [2024-10-12 22:18:01.678221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2704580 with addr=10.0.0.2, port=4420 00:29:43.314 [2024-10-12 22:18:01.678229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2704580 is same with the state(6) to be set 00:29:43.314 [2024-10-12 22:18:01.678239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d6e90 (9): Bad file descriptor 00:29:43.314 [2024-10-12 22:18:01.678248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2739640 (9): Bad file descriptor 00:29:43.314 [2024-10-12 22:18:01.678257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:43.314 [2024-10-12 22:18:01.678263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:43.314 [2024-10-12 22:18:01.678270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:43.314 [2024-10-12 22:18:01.678280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:43.314 [2024-10-12 22:18:01.678288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:43.314 [2024-10-12 22:18:01.678299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:29:43.314 [2024-10-12 22:18:01.678308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:43.314 [2024-10-12 22:18:01.678315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:43.314 [2024-10-12 22:18:01.678328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:43.314 [2024-10-12 22:18:01.678363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.314 [2024-10-12 22:18:01.678370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.314 [2024-10-12 22:18:01.678377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.314 [2024-10-12 22:18:01.678385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2704580 (9): Bad file descriptor 00:29:43.314 [2024-10-12 22:18:01.678394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:43.314 [2024-10-12 22:18:01.678401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:43.314 [2024-10-12 22:18:01.678408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:43.314 [2024-10-12 22:18:01.678650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:43.314 [2024-10-12 22:18:01.678662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:43.314 [2024-10-12 22:18:01.678670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:29:43.314 [2024-10-12 22:18:01.678702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.314 [2024-10-12 22:18:01.678710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.314 [2024-10-12 22:18:01.678717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:43.315 [2024-10-12 22:18:01.678723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:43.315 [2024-10-12 22:18:01.678730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:43.315 [2024-10-12 22:18:01.678763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.575 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # nvmfpid= 00:29:43.575 22:18:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # sleep 1 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # kill -9 3645677 00:29:44.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 143: kill: (3645677) - No such process 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@143 -- # true 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@145 -- # stoptarget 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:44.516 22:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.516 rmmod nvme_tcp 00:29:44.516 rmmod nvme_fabrics 00:29:44.516 rmmod nvme_keyring 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:44.516 
22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.516 22:18:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.060 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.060 00:29:47.060 real 0m7.723s 00:29:47.060 user 0m18.863s 00:29:47.060 sys 0m1.271s 00:29:47.060 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:47.060 22:18:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:47.060 ************************************ 00:29:47.060 END TEST nvmf_shutdown_tc3 00:29:47.060 ************************************ 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ e810 == \e\8\1\0 ]] 00:29:47.060 22:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@173 -- # [[ tcp == \r\d\m\a ]] 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@174 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:47.060 ************************************ 00:29:47.060 START TEST nvmf_shutdown_tc4 00:29:47.060 ************************************ 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # starttarget 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.060 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 
00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.061 22:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:47.061 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:47.061 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.061 22:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:47.061 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:47.061 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.061 22:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:29:47.061 00:29:47.061 --- 10.0.0.2 ping statistics --- 00:29:47.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.061 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:29:47.061 00:29:47.061 --- 10.0.0.1 ping statistics --- 00:29:47.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.061 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.061 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:29:47.062 22:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=3647237 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 3647237 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3647237 ']' 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:47.062 22:18:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:47.062 [2024-10-12 22:18:05.525183] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:47.062 [2024-10-12 22:18:05.525244] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.322 [2024-10-12 22:18:05.614598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.322 [2024-10-12 22:18:05.654267] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.322 [2024-10-12 22:18:05.654316] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.322 [2024-10-12 22:18:05.654322] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.322 [2024-10-12 22:18:05.654328] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.322 [2024-10-12 22:18:05.654332] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:47.322 [2024-10-12 22:18:05.654486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.322 [2024-10-12 22:18:05.654647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.322 [2024-10-12 22:18:05.654806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.322 [2024-10-12 22:18:05.654807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:47.892 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:47.892 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:29:47.892 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:29:47.892 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:47.892 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:47.892 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.892 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.892 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.892 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:47.892 [2024-10-12 22:18:06.374633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.892 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.153 22:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.153 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:48.153 Malloc1 00:29:48.153 [2024-10-12 22:18:06.473292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.153 Malloc2 00:29:48.153 Malloc3 00:29:48.153 Malloc4 00:29:48.153 Malloc5 00:29:48.414 Malloc6 00:29:48.414 Malloc7 00:29:48.414 Malloc8 00:29:48.414 Malloc9 
00:29:48.414 Malloc10 00:29:48.414 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.414 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:48.414 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:48.414 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:48.414 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@154 -- # perfpid=3647475 00:29:48.414 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # sleep 5 00:29:48.414 22:18:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@153 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:48.674 [2024-10-12 22:18:06.946772] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@157 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@160 -- # killprocess 3647237 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3647237 ']' 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3647237 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3647237 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3647237' 00:29:53.964 killing process with pid 3647237 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3647237 00:29:53.964 22:18:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3647237 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with 
error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 [2024-10-12 22:18:11.957556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a170 is same with Write completed with error (sct=0, sc=8) 00:29:53.964 the state(6) to be set 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 [2024-10-12 22:18:11.957595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a170 is same with the state(6) to be set 00:29:53.964 [2024-10-12 22:18:11.957602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a170 is same with the state(6) to be set 00:29:53.964 [2024-10-12 22:18:11.957607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a170 is 
same with the state(6) to be set 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 [2024-10-12 22:18:11.957796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a640 is same with the state(6) to be set 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 [2024-10-12 22:18:11.957821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a640 is same with the state(6) to be set 00:29:53.964 [2024-10-12 22:18:11.957827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a640 is same with the state(6) to be set 00:29:53.964 [2024-10-12 22:18:11.957832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a640 is same with the state(6) to be set 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 [2024-10-12 22:18:11.957837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a640 is same with the state(6) to be set 00:29:53.964 [2024-10-12 22:18:11.957843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a640 is same with the state(6) to be set 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 [2024-10-12 22:18:11.957852] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127a640 is same with the state(6) to be set 00:29:53.964 starting I/O failed: -6 00:29:53.964 [2024-10-12 22:18:11.957895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 [2024-10-12 22:18:11.958228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127ab10 is same with the state(6) to be set 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 [2024-10-12 22:18:11.958250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127ab10 is same with the state(6) to be set 00:29:53.964 starting I/O failed: -6 00:29:53.964 [2024-10-12 22:18:11.958257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127ab10 is same with the state(6) to be set 00:29:53.964 [2024-10-12 22:18:11.958264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127ab10 is same with the state(6) to be set 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 [2024-10-12 22:18:11.958271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127ab10 is same with the state(6) to be set 00:29:53.964 starting I/O failed: -6 00:29:53.964 [2024-10-12 22:18:11.958277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x127ab10 is same with the state(6) to be set 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 [2024-10-12 22:18:11.958283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127ab10 is same with the state(6) to be set 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.964 Write completed with error (sct=0, sc=8) 00:29:53.964 starting I/O failed: -6 00:29:53.965 
Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 [2024-10-12 22:18:11.958746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 
00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with 
error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 [2024-10-12 22:18:11.959664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O 
failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting I/O failed: -6 00:29:53.965 Write completed with error (sct=0, sc=8) 00:29:53.965 starting 
00:29:53.965 Write completed with error (sct=0, sc=8)
00:29:53.965 starting I/O failed: -6
00:29:53.965 [the two lines above repeat many times at 00:29:53.965; verbatim repeats elided]
00:29:53.965 [2024-10-12 22:18:11.961057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.965 NVMe io qpair process completion error
00:29:53.966 [2024-10-12 22:18:11.962132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e6a20 is same with the state(6) to be set
00:29:53.966 [the line above repeats for tqpair=0x14e6a20 at 22:18:11.962152 through .962179; repeats elided]
00:29:53.966 [2024-10-12 22:18:11.962371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e6ef0 is same with the state(6) to be set
00:29:53.966 [the line above repeats for tqpair=0x14e6ef0 at 22:18:11.962387 through .962418; repeats elided]
00:29:53.966 [2024-10-12 22:18:11.962404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:53.966 [2024-10-12 22:18:11.962870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127afe0 is same with the state(6) to be set
00:29:53.966 [the line above repeats for tqpair=0x127afe0 at 22:18:11.962894 through .962923; repeats elided]
00:29:53.966 [2024-10-12 22:18:11.963234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:53.967 [2024-10-12 22:18:11.964147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.967 [interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines at 00:29:53.966-00:29:53.967 elided]
00:29:53.967 [2024-10-12 22:18:11.965973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.967 NVMe io qpair process completion error
00:29:53.967 [2024-10-12 22:18:11.966963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.968 [2024-10-12 22:18:11.967778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:53.968 [2024-10-12 22:18:11.968722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:53.969 [interleaved "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines at 00:29:53.968-00:29:53.969 elided]
00:29:53.969 [2024-10-12 22:18:11.971380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.969 NVMe io qpair process completion error
00:29:53.969 [2024-10-12 22:18:11.972626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:53.969 [further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines at 00:29:53.969 elided; log continues]
Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with 
error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 [2024-10-12 22:18:11.973553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 
Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, 
sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.969 starting I/O failed: -6 00:29:53.969 [2024-10-12 22:18:11.974465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.969 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error 
(sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with 
error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed 
with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 [2024-10-12 22:18:11.976159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.970 NVMe io qpair process completion error 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed 
with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 [2024-10-12 22:18:11.977234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 
Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 Write completed with 
error (sct=0, sc=8) 00:29:53.970 Write completed with error (sct=0, sc=8) 00:29:53.970 starting I/O failed: -6 00:29:53.970 [2024-10-12 22:18:11.978050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O 
failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write 
completed with error (sct=0, sc=8) 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 [2024-10-12 22:18:11.978993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 
00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, 
sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 Write completed with error 
(sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 00:29:53.971 [2024-10-12 22:18:11.980637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.971 NVMe io qpair process completion error 00:29:53.971 Write completed with error (sct=0, sc=8) 00:29:53.971 starting I/O failed: -6 [previous two lines repeated many times, interleaved, between each of the ERROR lines below] 00:29:53.972 [2024-10-12 22:18:11.981796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.972 [2024-10-12 22:18:11.982619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.972 [2024-10-12 22:18:11.983550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.973 [2024-10-12 22:18:11.986455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.973 NVMe io qpair process completion error 00:29:53.973 [2024-10-12 22:18:11.987461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.973 [2024-10-12 22:18:11.988269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.974 [2024-10-12 22:18:11.989208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.974 [2024-10-12 22:18:11.990860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.974 NVMe io qpair process completion error 00:29:53.974 [2024-10-12 22:18:11.991783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.975 [2024-10-12 22:18:11.992602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6
00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 [2024-10-12 22:18:11.993958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O 
failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting 
I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.975 Write completed with error (sct=0, sc=8) 00:29:53.975 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 
starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 [2024-10-12 22:18:11.997531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.976 NVMe io qpair process completion error 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with 
error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 [2024-10-12 22:18:11.998628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write 
completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error 
(sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 [2024-10-12 22:18:11.999451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 
starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 
Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 [2024-10-12 22:18:12.000380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.976 Write completed with error (sct=0, sc=8) 00:29:53.976 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 
00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: 
-6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O 
failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 [2024-10-12 22:18:12.001831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.977 NVMe io qpair process completion error 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 
starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 [2024-10-12 22:18:12.003080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed 
with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 [2024-10-12 22:18:12.003931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 
00:29:53.977 starting I/O failed: -6 00:29:53.977 starting I/O failed: -6 00:29:53.977 starting I/O failed: -6 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.977 starting I/O failed: -6 00:29:53.977 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 starting I/O failed: -6 00:29:53.978 Write completed with error (sct=0, sc=8) 00:29:53.978 
starting I/O failed: -6
00:29:53.978 Write completed with error (sct=0, sc=8)
00:29:53.978 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued writes ...]
00:29:53.978 [2024-10-12 22:18:12.005095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:53.978 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining queued writes ...]
00:29:53.978 [2024-10-12 22:18:12.008595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:53.978 NVMe io qpair process completion error
00:29:53.978 Initializing NVMe Controllers
00:29:53.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:53.978 Controller IO queue size 128, less than required.
00:29:53.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:53.978 Controller IO queue size 128, less than required.
00:29:53.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:53.978 Controller IO queue size 128, less than required.
00:29:53.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:53.979 Controller IO queue size 128, less than required.
00:29:53.979 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:53.979 Controller IO queue size 128, less than required.
00:29:53.979 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:53.979 Controller IO queue size 128, less than required.
00:29:53.979 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:53.979 Controller IO queue size 128, less than required.
00:29:53.979 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:53.979 Controller IO queue size 128, less than required.
00:29:53.979 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:53.979 Controller IO queue size 128, less than required.
00:29:53.979 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:53.979 Controller IO queue size 128, less than required.
00:29:53.979 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:53.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:53.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:53.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:53.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:53.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:53.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:53.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:53.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:53.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:53.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:53.979 Initialization complete. Launching workers.
00:29:53.979 ========================================================
00:29:53.979 Latency(us)
00:29:53.979 Device Information : IOPS MiB/s Average min max
00:29:53.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1930.70 82.96 66315.16 688.35 119099.09
00:29:53.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1895.83 81.46 67568.99 875.78 128019.00
00:29:53.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1905.95 81.90 67232.04 693.11 126193.65
00:29:53.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1838.37 78.99 69726.27 824.74 124767.95
00:29:53.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1884.21 80.96 68066.92 731.85 125647.44
00:29:53.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1897.34 81.53 67618.68 737.58 127176.11
00:29:53.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1905.52 81.88 67376.41 870.20 132650.22
00:29:53.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1927.47 82.82 66635.13 622.06 118621.88
00:29:53.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1880.98 80.82 67570.12 883.92 127748.86
00:29:53.979 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1878.61 80.72 67676.40 642.81 119536.28
00:29:53.979 ========================================================
00:29:53.979 Total : 18944.98 814.04 67567.53 622.06 132650.22
00:29:53.979
00:29:53.979 [2024-10-12 22:18:12.014207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120f820 is same with the state(6) to be set
00:29:53.979 [2024-10-12 22:18:12.014251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1210c40 is same with the state(6) to be set
00:29:53.979 [2024-10-12 22:18:12.014283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1211020 is same with the state(6) to be set
00:29:53.979 [2024-10-12 22:18:12.014317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120ed00 is same with the state(6) to be set
00:29:53.979 [2024-10-12 22:18:12.014346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120fb50 is same with the state(6) to be set
00:29:53.979 [2024-10-12 22:18:12.014375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211350 is same with the state(6) to be set
00:29:53.979 [2024-10-12 22:18:12.014405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120f4f0 is same with the state(6) to be set
00:29:53.979 [2024-10-12 22:18:12.014435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211680 is same with the state(6) to be set
00:29:53.979 [2024-10-12 22:18:12.014463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120eb20 is same with the state(6) to be set
00:29:53.979 [2024-10-12 22:18:12.014494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120f1c0 is same with the state(6) to be set
00:29:53.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:53.979 22:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@161 -- # nvmfpid=
00:29:53.979 22:18:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@164 -- # sleep 1
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # wait 3647475
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@165 -- # true
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@166 -- # stoptarget
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # nvmfcleanup
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:54.921 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n '' ']'
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:54.921 22:18:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:57.467 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:57.467
00:29:57.467 real 0m10.272s
00:29:57.467 user 0m27.849s
00:29:57.467 sys 0m4.115s
00:29:57.467 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:57.467 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:57.467 ************************************
00:29:57.467 END TEST nvmf_shutdown_tc4
00:29:57.467 ************************************
00:29:57.467 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@177 -- # trap - SIGINT SIGTERM EXIT
00:29:57.467
00:29:57.467 real 0m43.382s
00:29:57.467 user 1m45.316s
00:29:57.467 sys 0m14.054s
00:29:57.467 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:57.467 22:18:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:57.467 ************************************
00:29:57.467 END TEST nvmf_shutdown
00:29:57.467 ************************************
00:29:57.467 22:18:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:29:57.467
00:29:57.467 real 19m34.494s
00:29:57.467 user 51m33.827s
00:29:57.467 sys 4m50.196s
00:29:57.467 22:18:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:57.467 22:18:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:29:57.467 ************************************
00:29:57.467 END TEST nvmf_target_extra
00:29:57.467 ************************************
00:29:57.467 22:18:15 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:29:57.467 22:18:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:29:57.467 22:18:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:57.467 22:18:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:57.467 ************************************
00:29:57.467 START TEST nvmf_host
00:29:57.467 ************************************
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:29:57.467 * Looking for test storage...
00:29:57.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:29:57.467 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:29:57.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:57.468 --rc genhtml_branch_coverage=1
00:29:57.468 --rc genhtml_function_coverage=1
00:29:57.468 --rc genhtml_legend=1
00:29:57.468 --rc geninfo_all_blocks=1
00:29:57.468 --rc geninfo_unexecuted_blocks=1
00:29:57.468
00:29:57.468 '
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:29:57.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:57.468 --rc genhtml_branch_coverage=1
00:29:57.468 --rc genhtml_function_coverage=1
00:29:57.468 --rc genhtml_legend=1
00:29:57.468 --rc geninfo_all_blocks=1
00:29:57.468 --rc geninfo_unexecuted_blocks=1
00:29:57.468
00:29:57.468 '
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:29:57.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:57.468 --rc genhtml_branch_coverage=1
00:29:57.468 --rc genhtml_function_coverage=1
00:29:57.468 --rc genhtml_legend=1
00:29:57.468 --rc geninfo_all_blocks=1
00:29:57.468 --rc geninfo_unexecuted_blocks=1
00:29:57.468
00:29:57.468 '
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:29:57.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:57.468 --rc genhtml_branch_coverage=1
00:29:57.468 --rc genhtml_function_coverage=1
00:29:57.468 --rc genhtml_legend=1
00:29:57.468 --rc geninfo_all_blocks=1
00:29:57.468 --rc geninfo_unexecuted_blocks=1
00:29:57.468
00:29:57.468 '
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:57.468 ************************************
00:29:57.468 START TEST nvmf_multicontroller
00:29:57.468 ************************************
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
* Looking for test storage...
00:29:57.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version
00:29:57.468 22:18:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:29:57.730 22:18:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:29:57.730 22:18:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:57.730 22:18:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:57.730 22:18:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:57.730 22:18:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-:
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-:
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<'
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:57.730 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:29:57.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:57.730 --rc genhtml_branch_coverage=1
00:29:57.730 --rc genhtml_function_coverage=1
00:29:57.730 --rc genhtml_legend=1
00:29:57.730 --rc geninfo_all_blocks=1
00:29:57.730 --rc geninfo_unexecuted_blocks=1
00:29:57.730
00:29:57.731 '
00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:29:57.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:57.731 --rc genhtml_branch_coverage=1
00:29:57.731 --rc genhtml_function_coverage=1
00:29:57.731 --rc genhtml_legend=1
00:29:57.731 --rc geninfo_all_blocks=1
00:29:57.731 --rc geninfo_unexecuted_blocks=1
00:29:57.731
00:29:57.731 '
00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:29:57.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:57.731 --rc genhtml_branch_coverage=1
00:29:57.731 --rc genhtml_function_coverage=1
00:29:57.731 --rc genhtml_legend=1
00:29:57.731 --rc geninfo_all_blocks=1
00:29:57.731 --rc geninfo_unexecuted_blocks=1
00:29:57.731
00:29:57.731 '
00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:29:57.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:57.731 --rc genhtml_branch_coverage=1
00:29:57.731 --rc genhtml_function_coverage=1
00:29:57.731 --rc genhtml_legend=1
00:29:57.731 --rc geninfo_all_blocks=1
00:29:57.731 --rc geninfo_unexecuted_blocks=1
00:29:57.731
00:29:57.731 '
00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- #
NVMF_SECOND_PORT=4421 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.731 22:18:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:57.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@436 -- # remove_spdk_ns 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.731 22:18:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
pci_devs+=("${e810[@]}") 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:05.884 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:05.884 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:05.884 22:18:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:05.884 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:05.884 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.884 22:18:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.884 22:18:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:30:05.884 00:30:05.884 --- 10.0.0.2 ping statistics --- 00:30:05.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.884 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:30:05.884 00:30:05.884 --- 10.0.0.1 ping statistics --- 00:30:05.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.884 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.884 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:05.885 22:18:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=3653480 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 3653480 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3653480 ']' 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:05.885 22:18:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:05.885 [2024-10-12 22:18:23.608234] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:05.885 [2024-10-12 22:18:23.608300] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.885 [2024-10-12 22:18:23.698187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:05.885 [2024-10-12 22:18:23.746146] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.885 [2024-10-12 22:18:23.746198] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.885 [2024-10-12 22:18:23.746206] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.885 [2024-10-12 22:18:23.746213] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.885 [2024-10-12 22:18:23.746221] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:05.885 [2024-10-12 22:18:23.746392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.885 [2024-10-12 22:18:23.746549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.885 [2024-10-12 22:18:23.746551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.146 [2024-10-12 22:18:24.487186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.146 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:30:06.146 Malloc0 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.147 [2024-10-12 22:18:24.561596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:06.147 
22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.147 [2024-10-12 22:18:24.573506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.147 Malloc1 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.147 22:18:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.147 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.408 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.408 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:06.408 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3653538 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3653538 /var/tmp/bdevperf.sock 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3653538 ']' 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:06.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:06.409 22:18:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.351 NVMe0n1 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.351 22:18:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.351 1 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.351 request: 00:30:07.351 { 00:30:07.351 "name": "NVMe0", 00:30:07.351 "trtype": "tcp", 00:30:07.351 "traddr": "10.0.0.2", 00:30:07.351 "adrfam": "ipv4", 00:30:07.351 "trsvcid": "4420", 00:30:07.351 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:07.351 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:07.351 "hostaddr": "10.0.0.1", 00:30:07.351 "prchk_reftag": false, 00:30:07.351 "prchk_guard": false, 00:30:07.351 "hdgst": false, 00:30:07.351 "ddgst": false, 00:30:07.351 "allow_unrecognized_csi": false, 00:30:07.351 "method": "bdev_nvme_attach_controller", 00:30:07.351 "req_id": 1 00:30:07.351 } 00:30:07.351 Got JSON-RPC error response 00:30:07.351 response: 00:30:07.351 { 00:30:07.351 "code": -114, 00:30:07.351 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:07.351 } 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.351 request: 00:30:07.351 { 00:30:07.351 "name": "NVMe0", 00:30:07.351 "trtype": "tcp", 00:30:07.351 "traddr": "10.0.0.2", 00:30:07.351 "adrfam": "ipv4", 00:30:07.351 "trsvcid": "4420", 00:30:07.351 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:07.351 "hostaddr": "10.0.0.1", 00:30:07.351 "prchk_reftag": false, 00:30:07.351 "prchk_guard": false, 00:30:07.351 "hdgst": false, 00:30:07.351 "ddgst": false, 00:30:07.351 "allow_unrecognized_csi": false, 00:30:07.351 "method": "bdev_nvme_attach_controller", 00:30:07.351 "req_id": 1 00:30:07.351 } 00:30:07.351 Got JSON-RPC error response 00:30:07.351 response: 00:30:07.351 { 00:30:07.351 "code": -114, 00:30:07.351 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:07.351 } 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.351 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.351 request: 00:30:07.351 { 00:30:07.351 "name": "NVMe0", 00:30:07.351 "trtype": "tcp", 00:30:07.351 "traddr": "10.0.0.2", 00:30:07.351 "adrfam": "ipv4", 00:30:07.351 "trsvcid": "4420", 00:30:07.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.352 
"hostaddr": "10.0.0.1", 00:30:07.352 "prchk_reftag": false, 00:30:07.352 "prchk_guard": false, 00:30:07.352 "hdgst": false, 00:30:07.352 "ddgst": false, 00:30:07.352 "multipath": "disable", 00:30:07.352 "allow_unrecognized_csi": false, 00:30:07.352 "method": "bdev_nvme_attach_controller", 00:30:07.352 "req_id": 1 00:30:07.352 } 00:30:07.352 Got JSON-RPC error response 00:30:07.352 response: 00:30:07.352 { 00:30:07.352 "code": -114, 00:30:07.352 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:07.352 } 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.352 request: 00:30:07.352 { 00:30:07.352 "name": "NVMe0", 00:30:07.352 "trtype": "tcp", 00:30:07.352 "traddr": "10.0.0.2", 00:30:07.352 "adrfam": "ipv4", 00:30:07.352 "trsvcid": "4420", 00:30:07.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.352 "hostaddr": "10.0.0.1", 00:30:07.352 "prchk_reftag": false, 00:30:07.352 "prchk_guard": false, 00:30:07.352 "hdgst": false, 00:30:07.352 "ddgst": false, 00:30:07.352 "multipath": "failover", 00:30:07.352 "allow_unrecognized_csi": false, 00:30:07.352 "method": "bdev_nvme_attach_controller", 00:30:07.352 "req_id": 1 00:30:07.352 } 00:30:07.352 Got JSON-RPC error response 00:30:07.352 response: 00:30:07.352 { 00:30:07.352 "code": -114, 00:30:07.352 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:07.352 } 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:07.352 
22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.352 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.352 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.612 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.612 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:07.612 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.612 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.612 00:30:07.612 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.612 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:30:07.612 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:07.612 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.612 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:07.612 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.613 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:07.613 22:18:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:08.998 { 00:30:08.998 "results": [ 00:30:08.998 { 00:30:08.998 "job": "NVMe0n1", 00:30:08.998 "core_mask": "0x1", 00:30:08.998 "workload": "write", 00:30:08.998 "status": "finished", 00:30:08.998 "queue_depth": 128, 00:30:08.998 "io_size": 4096, 00:30:08.998 "runtime": 1.007281, 00:30:08.998 "iops": 29116.99912933928, 00:30:08.998 "mibps": 113.73827784898157, 00:30:08.998 "io_failed": 0, 00:30:08.998 "io_timeout": 0, 00:30:08.998 "avg_latency_us": 4386.673922283974, 00:30:08.998 "min_latency_us": 1911.4666666666667, 00:30:08.998 "max_latency_us": 7755.093333333333 00:30:08.998 } 00:30:08.999 ], 00:30:08.999 "core_count": 1 00:30:08.999 } 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3653538 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3653538 ']' 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3653538 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3653538 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3653538' 00:30:08.999 killing process with pid 3653538 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3653538 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3653538 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:30:08.999 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:08.999 [2024-10-12 22:18:24.713951] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:08.999 [2024-10-12 22:18:24.714035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653538 ] 00:30:08.999 [2024-10-12 22:18:24.797494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.999 [2024-10-12 22:18:24.844511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.999 [2024-10-12 22:18:25.956798] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 7559f4c0-2f2f-4254-a5cf-54709d8ced86 already exists 00:30:08.999 [2024-10-12 22:18:25.956829] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:7559f4c0-2f2f-4254-a5cf-54709d8ced86 alias for bdev NVMe1n1 00:30:08.999 [2024-10-12 22:18:25.956838] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:08.999 Running I/O for 1 seconds... 00:30:08.999 29107.00 IOPS, 113.70 MiB/s 00:30:08.999 Latency(us) 00:30:08.999 [2024-10-12T20:18:27.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.999 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:08.999 NVMe0n1 : 1.01 29117.00 113.74 0.00 0.00 4386.67 1911.47 7755.09 00:30:08.999 [2024-10-12T20:18:27.488Z] =================================================================================================================== 00:30:08.999 [2024-10-12T20:18:27.488Z] Total : 29117.00 113.74 0.00 0.00 4386.67 1911.47 7755.09 00:30:08.999 Received shutdown signal, test time was about 1.000000 seconds 00:30:08.999 00:30:08.999 Latency(us) 00:30:08.999 [2024-10-12T20:18:27.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.999 [2024-10-12T20:18:27.488Z] =================================================================================================================== 00:30:08.999 [2024-10-12T20:18:27.488Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:30:08.999 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.999 rmmod nvme_tcp 00:30:08.999 rmmod nvme_fabrics 00:30:08.999 rmmod nvme_keyring 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 3653480 ']' 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 3653480 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3653480 ']' 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3653480 
00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3653480 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3653480' 00:30:08.999 killing process with pid 3653480 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3653480 00:30:08.999 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3653480 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.260 22:18:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:11.806 00:30:11.806 real 0m13.887s 00:30:11.806 user 0m16.749s 00:30:11.806 sys 0m6.488s 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:11.806 ************************************ 00:30:11.806 END TEST nvmf_multicontroller 00:30:11.806 ************************************ 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.806 ************************************ 00:30:11.806 START TEST nvmf_aer 00:30:11.806 ************************************ 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:11.806 * Looking for test storage... 
00:30:11.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.806 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:11.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.807 --rc genhtml_branch_coverage=1 00:30:11.807 --rc genhtml_function_coverage=1 00:30:11.807 --rc genhtml_legend=1 00:30:11.807 --rc geninfo_all_blocks=1 00:30:11.807 --rc geninfo_unexecuted_blocks=1 00:30:11.807 00:30:11.807 ' 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:11.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.807 --rc 
genhtml_branch_coverage=1 00:30:11.807 --rc genhtml_function_coverage=1 00:30:11.807 --rc genhtml_legend=1 00:30:11.807 --rc geninfo_all_blocks=1 00:30:11.807 --rc geninfo_unexecuted_blocks=1 00:30:11.807 00:30:11.807 ' 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:11.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.807 --rc genhtml_branch_coverage=1 00:30:11.807 --rc genhtml_function_coverage=1 00:30:11.807 --rc genhtml_legend=1 00:30:11.807 --rc geninfo_all_blocks=1 00:30:11.807 --rc geninfo_unexecuted_blocks=1 00:30:11.807 00:30:11.807 ' 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:11.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.807 --rc genhtml_branch_coverage=1 00:30:11.807 --rc genhtml_function_coverage=1 00:30:11.807 --rc genhtml_legend=1 00:30:11.807 --rc geninfo_all_blocks=1 00:30:11.807 --rc geninfo_unexecuted_blocks=1 00:30:11.807 00:30:11.807 ' 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.807 22:18:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.807 22:18:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:11.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:11.807 22:18:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma 
]] 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:20.052 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:20.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:20.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:20.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:20.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:20.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:20.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:30:20.053 00:30:20.053 --- 10.0.0.2 ping statistics --- 00:30:20.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.053 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:20.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:30:20.053 00:30:20.053 --- 10.0.0.1 ping statistics --- 00:30:20.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.053 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=3658284 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 3658284 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3658284 ']' 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:20.053 22:18:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.053 [2024-10-12 22:18:37.593120] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:20.053 [2024-10-12 22:18:37.593191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.053 [2024-10-12 22:18:37.682698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:20.053 [2024-10-12 22:18:37.736071] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:20.053 [2024-10-12 22:18:37.736145] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.053 [2024-10-12 22:18:37.736154] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.053 [2024-10-12 22:18:37.736162] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.053 [2024-10-12 22:18:37.736168] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:20.053 [2024-10-12 22:18:37.736265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.053 [2024-10-12 22:18:37.736426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:20.053 [2024-10-12 22:18:37.736590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.053 [2024-10-12 22:18:37.736590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.053 [2024-10-12 22:18:38.460626] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.053 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.054 Malloc0 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.054 [2024-10-12 22:18:38.526272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.054 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.315 [ 00:30:20.315 { 00:30:20.315 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:20.315 "subtype": "Discovery", 00:30:20.315 "listen_addresses": [], 00:30:20.315 "allow_any_host": true, 00:30:20.315 "hosts": [] 00:30:20.315 }, 00:30:20.315 { 00:30:20.315 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:20.315 "subtype": "NVMe", 00:30:20.315 "listen_addresses": [ 00:30:20.315 { 00:30:20.315 "trtype": "TCP", 00:30:20.315 "adrfam": "IPv4", 00:30:20.315 "traddr": "10.0.0.2", 00:30:20.315 "trsvcid": "4420" 00:30:20.315 } 00:30:20.315 ], 00:30:20.315 "allow_any_host": true, 00:30:20.315 "hosts": [], 00:30:20.315 "serial_number": "SPDK00000000000001", 00:30:20.315 "model_number": "SPDK bdev Controller", 00:30:20.315 "max_namespaces": 2, 00:30:20.315 "min_cntlid": 1, 00:30:20.315 "max_cntlid": 65519, 00:30:20.315 "namespaces": [ 00:30:20.315 { 00:30:20.315 "nsid": 1, 00:30:20.315 "bdev_name": "Malloc0", 00:30:20.315 "name": "Malloc0", 00:30:20.315 "nguid": "C9394A0102C14EC09E1174A6637438FF", 00:30:20.315 "uuid": "c9394a01-02c1-4ec0-9e11-74a6637438ff" 00:30:20.315 } 00:30:20.315 ] 00:30:20.315 } 00:30:20.315 ] 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3658559 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:30:20.315 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.576 Malloc1 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:20.576 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.577 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.577 Asynchronous Event Request test 00:30:20.577 Attaching to 10.0.0.2 00:30:20.577 Attached to 10.0.0.2 00:30:20.577 Registering asynchronous event callbacks... 00:30:20.577 Starting namespace attribute notice tests for all controllers... 00:30:20.577 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:20.577 aer_cb - Changed Namespace 00:30:20.577 Cleaning up... 
00:30:20.577 [ 00:30:20.577 { 00:30:20.577 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:20.577 "subtype": "Discovery", 00:30:20.577 "listen_addresses": [], 00:30:20.577 "allow_any_host": true, 00:30:20.577 "hosts": [] 00:30:20.577 }, 00:30:20.577 { 00:30:20.577 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:20.577 "subtype": "NVMe", 00:30:20.577 "listen_addresses": [ 00:30:20.577 { 00:30:20.577 "trtype": "TCP", 00:30:20.577 "adrfam": "IPv4", 00:30:20.577 "traddr": "10.0.0.2", 00:30:20.577 "trsvcid": "4420" 00:30:20.577 } 00:30:20.577 ], 00:30:20.577 "allow_any_host": true, 00:30:20.577 "hosts": [], 00:30:20.577 "serial_number": "SPDK00000000000001", 00:30:20.577 "model_number": "SPDK bdev Controller", 00:30:20.577 "max_namespaces": 2, 00:30:20.577 "min_cntlid": 1, 00:30:20.577 "max_cntlid": 65519, 00:30:20.577 "namespaces": [ 00:30:20.577 { 00:30:20.577 "nsid": 1, 00:30:20.577 "bdev_name": "Malloc0", 00:30:20.577 "name": "Malloc0", 00:30:20.577 "nguid": "C9394A0102C14EC09E1174A6637438FF", 00:30:20.577 "uuid": "c9394a01-02c1-4ec0-9e11-74a6637438ff" 00:30:20.577 }, 00:30:20.577 { 00:30:20.577 "nsid": 2, 00:30:20.577 "bdev_name": "Malloc1", 00:30:20.577 "name": "Malloc1", 00:30:20.577 "nguid": "C3BB1AE2B9F7445CA336F305C55E5267", 00:30:20.577 "uuid": "c3bb1ae2-b9f7-445c-a336-f305c55e5267" 00:30:20.577 } 00:30:20.577 ] 00:30:20.577 } 00:30:20.577 ] 00:30:20.577 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.577 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3658559 00:30:20.577 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:20.577 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.577 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.577 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.577 22:18:38 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:20.577 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.577 22:18:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.577 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.577 rmmod nvme_tcp 00:30:20.577 rmmod nvme_fabrics 00:30:20.838 rmmod nvme_keyring 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 
3658284 ']' 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 3658284 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3658284 ']' 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3658284 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3658284 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3658284' 00:30:20.838 killing process with pid 3658284 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3658284 00:30:20.838 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3658284 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.099 22:18:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.013 22:18:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.013 00:30:23.013 real 0m11.645s 00:30:23.013 user 0m8.643s 00:30:23.013 sys 0m6.125s 00:30:23.013 22:18:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:23.013 22:18:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:23.013 ************************************ 00:30:23.013 END TEST nvmf_aer 00:30:23.013 ************************************ 00:30:23.013 22:18:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:23.013 22:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:23.013 22:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:23.013 22:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.275 ************************************ 00:30:23.275 START TEST nvmf_async_init 00:30:23.275 ************************************ 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:23.275 * Looking for test storage... 
00:30:23.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.275 22:18:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.275 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:23.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.276 --rc genhtml_branch_coverage=1 00:30:23.276 --rc genhtml_function_coverage=1 00:30:23.276 --rc genhtml_legend=1 00:30:23.276 --rc geninfo_all_blocks=1 00:30:23.276 --rc geninfo_unexecuted_blocks=1 00:30:23.276 
00:30:23.276 ' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:23.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.276 --rc genhtml_branch_coverage=1 00:30:23.276 --rc genhtml_function_coverage=1 00:30:23.276 --rc genhtml_legend=1 00:30:23.276 --rc geninfo_all_blocks=1 00:30:23.276 --rc geninfo_unexecuted_blocks=1 00:30:23.276 00:30:23.276 ' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:23.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.276 --rc genhtml_branch_coverage=1 00:30:23.276 --rc genhtml_function_coverage=1 00:30:23.276 --rc genhtml_legend=1 00:30:23.276 --rc geninfo_all_blocks=1 00:30:23.276 --rc geninfo_unexecuted_blocks=1 00:30:23.276 00:30:23.276 ' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:23.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.276 --rc genhtml_branch_coverage=1 00:30:23.276 --rc genhtml_function_coverage=1 00:30:23.276 --rc genhtml_legend=1 00:30:23.276 --rc geninfo_all_blocks=1 00:30:23.276 --rc geninfo_unexecuted_blocks=1 00:30:23.276 00:30:23.276 ' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:23.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4ca94accfa0740f6857152d0256b0e41 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.276 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.537 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:23.537 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:23.537 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.537 22:18:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:31.685 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:31.685 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # (( 0 > 0 )) 
00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:31.685 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:31.685 22:18:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:31.685 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.685 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.686 22:18:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:31.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:31.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:30:31.686 00:30:31.686 --- 10.0.0.2 ping statistics --- 00:30:31.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.686 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:31.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:31.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:30:31.686 00:30:31.686 --- 10.0.0.1 ping statistics --- 00:30:31.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:31.686 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=3662890 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 3662890 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3662890 ']' 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 [2024-10-12 22:18:49.332170] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:31.686 [2024-10-12 22:18:49.332237] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.686 [2024-10-12 22:18:49.408927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.686 [2024-10-12 22:18:49.470532] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.686 [2024-10-12 22:18:49.470603] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.686 [2024-10-12 22:18:49.470614] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.686 [2024-10-12 22:18:49.470623] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.686 [2024-10-12 22:18:49.470630] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
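The namespace setup traced in the preceding entries (flush addresses, create `cvl_0_0_ns_spdk`, move the target-side interface into it, address both sides, open TCP/4420, then ping in both directions) can be sketched as the dry-run below. The `run()` wrapper is illustrative only: it echoes each command instead of executing it, since the real steps require root and the `cvl_0_*` interfaces; the commands and addresses themselves are reproduced from the log:

```shell
# Dry-run sketch of the two-namespace NVMe/TCP test topology.
run() { echo "+ $*"; }              # print instead of execute
NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                     # target side enters the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                  # root ns -> target ns
run ip netns exec "$NS" ping -c 1 10.0.0.1              # target ns -> root ns
```

The target application (`nvmf_tgt`) is then launched via `ip netns exec "$NS" …`, which is why the log's startup notices appear inside the namespace.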
00:30:31.686 [2024-10-12 22:18:49.470670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 [2024-10-12 22:18:49.612402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 null0 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4ca94accfa0740f6857152d0256b0e41 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 [2024-10-12 22:18:49.672741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 nvme0n1 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.686 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.686 [ 00:30:31.686 { 00:30:31.686 "name": "nvme0n1", 00:30:31.686 "aliases": [ 00:30:31.686 "4ca94acc-fa07-40f6-8571-52d0256b0e41" 00:30:31.686 ], 00:30:31.686 "product_name": "NVMe disk", 00:30:31.686 "block_size": 512, 00:30:31.686 "num_blocks": 2097152, 00:30:31.686 "uuid": "4ca94acc-fa07-40f6-8571-52d0256b0e41", 00:30:31.686 "numa_id": 0, 00:30:31.686 "assigned_rate_limits": { 00:30:31.686 "rw_ios_per_sec": 0, 00:30:31.686 "rw_mbytes_per_sec": 0, 00:30:31.686 "r_mbytes_per_sec": 0, 00:30:31.686 "w_mbytes_per_sec": 0 00:30:31.686 }, 00:30:31.686 "claimed": false, 00:30:31.686 "zoned": false, 00:30:31.686 "supported_io_types": { 00:30:31.686 "read": true, 00:30:31.686 "write": true, 00:30:31.686 "unmap": false, 00:30:31.686 "flush": true, 00:30:31.686 "reset": true, 00:30:31.686 "nvme_admin": true, 00:30:31.686 "nvme_io": true, 00:30:31.686 "nvme_io_md": false, 00:30:31.686 "write_zeroes": true, 00:30:31.686 "zcopy": false, 00:30:31.686 "get_zone_info": false, 00:30:31.686 "zone_management": false, 00:30:31.686 "zone_append": false, 00:30:31.686 "compare": true, 00:30:31.686 "compare_and_write": true, 00:30:31.686 "abort": true, 00:30:31.686 "seek_hole": false, 00:30:31.686 "seek_data": false, 00:30:31.686 "copy": true, 00:30:31.686 
"nvme_iov_md": false 00:30:31.687 }, 00:30:31.687 "memory_domains": [ 00:30:31.687 { 00:30:31.687 "dma_device_id": "system", 00:30:31.687 "dma_device_type": 1 00:30:31.687 } 00:30:31.687 ], 00:30:31.687 "driver_specific": { 00:30:31.687 "nvme": [ 00:30:31.687 { 00:30:31.687 "trid": { 00:30:31.687 "trtype": "TCP", 00:30:31.687 "adrfam": "IPv4", 00:30:31.687 "traddr": "10.0.0.2", 00:30:31.687 "trsvcid": "4420", 00:30:31.687 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:31.687 }, 00:30:31.687 "ctrlr_data": { 00:30:31.687 "cntlid": 1, 00:30:31.687 "vendor_id": "0x8086", 00:30:31.687 "model_number": "SPDK bdev Controller", 00:30:31.687 "serial_number": "00000000000000000000", 00:30:31.687 "firmware_revision": "24.09.1", 00:30:31.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.687 "oacs": { 00:30:31.687 "security": 0, 00:30:31.687 "format": 0, 00:30:31.687 "firmware": 0, 00:30:31.687 "ns_manage": 0 00:30:31.687 }, 00:30:31.687 "multi_ctrlr": true, 00:30:31.687 "ana_reporting": false 00:30:31.687 }, 00:30:31.687 "vs": { 00:30:31.687 "nvme_version": "1.3" 00:30:31.687 }, 00:30:31.687 "ns_data": { 00:30:31.687 "id": 1, 00:30:31.687 "can_share": true 00:30:31.687 } 00:30:31.687 } 00:30:31.687 ], 00:30:31.687 "mp_policy": "active_passive" 00:30:31.687 } 00:30:31.687 } 00:30:31.687 ] 00:30:31.687 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.687 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:31.687 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.687 22:18:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.687 [2024-10-12 22:18:49.946822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:31.687 [2024-10-12 22:18:49.946895] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x258a8a0 (9): Bad file descriptor 00:30:31.687 [2024-10-12 22:18:50.080227] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.687 [ 00:30:31.687 { 00:30:31.687 "name": "nvme0n1", 00:30:31.687 "aliases": [ 00:30:31.687 "4ca94acc-fa07-40f6-8571-52d0256b0e41" 00:30:31.687 ], 00:30:31.687 "product_name": "NVMe disk", 00:30:31.687 "block_size": 512, 00:30:31.687 "num_blocks": 2097152, 00:30:31.687 "uuid": "4ca94acc-fa07-40f6-8571-52d0256b0e41", 00:30:31.687 "numa_id": 0, 00:30:31.687 "assigned_rate_limits": { 00:30:31.687 "rw_ios_per_sec": 0, 00:30:31.687 "rw_mbytes_per_sec": 0, 00:30:31.687 "r_mbytes_per_sec": 0, 00:30:31.687 "w_mbytes_per_sec": 0 00:30:31.687 }, 00:30:31.687 "claimed": false, 00:30:31.687 "zoned": false, 00:30:31.687 "supported_io_types": { 00:30:31.687 "read": true, 00:30:31.687 "write": true, 00:30:31.687 "unmap": false, 00:30:31.687 "flush": true, 00:30:31.687 "reset": true, 00:30:31.687 "nvme_admin": true, 00:30:31.687 "nvme_io": true, 00:30:31.687 "nvme_io_md": false, 00:30:31.687 "write_zeroes": true, 00:30:31.687 "zcopy": false, 00:30:31.687 "get_zone_info": false, 00:30:31.687 "zone_management": false, 00:30:31.687 "zone_append": false, 00:30:31.687 "compare": true, 00:30:31.687 "compare_and_write": true, 00:30:31.687 "abort": true, 00:30:31.687 "seek_hole": false, 00:30:31.687 "seek_data": false, 00:30:31.687 "copy": true, 00:30:31.687 "nvme_iov_md": false 00:30:31.687 }, 00:30:31.687 "memory_domains": [ 00:30:31.687 { 00:30:31.687 
"dma_device_id": "system", 00:30:31.687 "dma_device_type": 1 00:30:31.687 } 00:30:31.687 ], 00:30:31.687 "driver_specific": { 00:30:31.687 "nvme": [ 00:30:31.687 { 00:30:31.687 "trid": { 00:30:31.687 "trtype": "TCP", 00:30:31.687 "adrfam": "IPv4", 00:30:31.687 "traddr": "10.0.0.2", 00:30:31.687 "trsvcid": "4420", 00:30:31.687 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:31.687 }, 00:30:31.687 "ctrlr_data": { 00:30:31.687 "cntlid": 2, 00:30:31.687 "vendor_id": "0x8086", 00:30:31.687 "model_number": "SPDK bdev Controller", 00:30:31.687 "serial_number": "00000000000000000000", 00:30:31.687 "firmware_revision": "24.09.1", 00:30:31.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.687 "oacs": { 00:30:31.687 "security": 0, 00:30:31.687 "format": 0, 00:30:31.687 "firmware": 0, 00:30:31.687 "ns_manage": 0 00:30:31.687 }, 00:30:31.687 "multi_ctrlr": true, 00:30:31.687 "ana_reporting": false 00:30:31.687 }, 00:30:31.687 "vs": { 00:30:31.687 "nvme_version": "1.3" 00:30:31.687 }, 00:30:31.687 "ns_data": { 00:30:31.687 "id": 1, 00:30:31.687 "can_share": true 00:30:31.687 } 00:30:31.687 } 00:30:31.687 ], 00:30:31.687 "mp_policy": "active_passive" 00:30:31.687 } 00:30:31.687 } 00:30:31.687 ] 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XzzqhuinQo 00:30:31.687 22:18:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XzzqhuinQo 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.XzzqhuinQo 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.687 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.687 [2024-10-12 22:18:50.171619] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:31.949 [2024-10-12 22:18:50.171858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.950 22:18:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.950 [2024-10-12 22:18:50.195680] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:31.950 nvme0n1 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.950 [ 00:30:31.950 { 00:30:31.950 "name": "nvme0n1", 00:30:31.950 "aliases": [ 00:30:31.950 "4ca94acc-fa07-40f6-8571-52d0256b0e41" 00:30:31.950 ], 00:30:31.950 "product_name": "NVMe disk", 00:30:31.950 "block_size": 512, 00:30:31.950 "num_blocks": 2097152, 00:30:31.950 "uuid": "4ca94acc-fa07-40f6-8571-52d0256b0e41", 00:30:31.950 "numa_id": 0, 00:30:31.950 "assigned_rate_limits": { 00:30:31.950 "rw_ios_per_sec": 0, 00:30:31.950 "rw_mbytes_per_sec": 0, 
00:30:31.950 "r_mbytes_per_sec": 0, 00:30:31.950 "w_mbytes_per_sec": 0 00:30:31.950 }, 00:30:31.950 "claimed": false, 00:30:31.950 "zoned": false, 00:30:31.950 "supported_io_types": { 00:30:31.950 "read": true, 00:30:31.950 "write": true, 00:30:31.950 "unmap": false, 00:30:31.950 "flush": true, 00:30:31.950 "reset": true, 00:30:31.950 "nvme_admin": true, 00:30:31.950 "nvme_io": true, 00:30:31.950 "nvme_io_md": false, 00:30:31.950 "write_zeroes": true, 00:30:31.950 "zcopy": false, 00:30:31.950 "get_zone_info": false, 00:30:31.950 "zone_management": false, 00:30:31.950 "zone_append": false, 00:30:31.950 "compare": true, 00:30:31.950 "compare_and_write": true, 00:30:31.950 "abort": true, 00:30:31.950 "seek_hole": false, 00:30:31.950 "seek_data": false, 00:30:31.950 "copy": true, 00:30:31.950 "nvme_iov_md": false 00:30:31.950 }, 00:30:31.950 "memory_domains": [ 00:30:31.950 { 00:30:31.950 "dma_device_id": "system", 00:30:31.950 "dma_device_type": 1 00:30:31.950 } 00:30:31.950 ], 00:30:31.950 "driver_specific": { 00:30:31.950 "nvme": [ 00:30:31.950 { 00:30:31.950 "trid": { 00:30:31.950 "trtype": "TCP", 00:30:31.950 "adrfam": "IPv4", 00:30:31.950 "traddr": "10.0.0.2", 00:30:31.950 "trsvcid": "4421", 00:30:31.950 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:31.950 }, 00:30:31.950 "ctrlr_data": { 00:30:31.950 "cntlid": 3, 00:30:31.950 "vendor_id": "0x8086", 00:30:31.950 "model_number": "SPDK bdev Controller", 00:30:31.950 "serial_number": "00000000000000000000", 00:30:31.950 "firmware_revision": "24.09.1", 00:30:31.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.950 "oacs": { 00:30:31.950 "security": 0, 00:30:31.950 "format": 0, 00:30:31.950 "firmware": 0, 00:30:31.950 "ns_manage": 0 00:30:31.950 }, 00:30:31.950 "multi_ctrlr": true, 00:30:31.950 "ana_reporting": false 00:30:31.950 }, 00:30:31.950 "vs": { 00:30:31.950 "nvme_version": "1.3" 00:30:31.950 }, 00:30:31.950 "ns_data": { 00:30:31.950 "id": 1, 00:30:31.950 "can_share": true 00:30:31.950 } 00:30:31.950 } 
00:30:31.950 ], 00:30:31.950 "mp_policy": "active_passive" 00:30:31.950 } 00:30:31.950 } 00:30:31.950 ] 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.XzzqhuinQo 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:31.950 rmmod nvme_tcp 00:30:31.950 rmmod nvme_fabrics 00:30:31.950 rmmod nvme_keyring 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:31.950 22:18:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 3662890 ']' 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 3662890 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3662890 ']' 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3662890 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:31.950 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3662890 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3662890' 00:30:32.212 killing process with pid 3662890 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3662890 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3662890 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:32.212 
22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.212 22:18:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:34.761 00:30:34.761 real 0m11.193s 00:30:34.761 user 0m3.581s 00:30:34.761 sys 0m6.074s 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:34.761 ************************************ 00:30:34.761 END TEST nvmf_async_init 00:30:34.761 ************************************ 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.761 ************************************ 00:30:34.761 START TEST dma 00:30:34.761 ************************************ 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
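The dma.sh preamble that follows gates on the lcov version via scripts/common.sh (`lt 1.15 2` calling `cmp_versions`), splitting each version string on `.`/`-`/`:` and comparing fields numerically. A simplified standalone sketch of that field-by-field compare (`version_lt` is an illustrative name, not the script's own, and missing fields default to 0 as in the `ver1[v]`/`ver2[v]` loop above):

```shell
# Field-by-field decimal version compare: returns 0 (true) if $1 < $2.
version_lt() {
  local IFS=.
  local -a v1 v2
  read -ra v1 <<<"$1"
  read -ra v2 <<<"$2"
  local i a b
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    a=${v1[i]:-0}                 # absent fields compare as 0
    b=${v2[i]:-0}
    ((a < b)) && return 0
    ((a > b)) && return 1
  done
  return 1                        # equal is not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
```

This matches the log's outcome: 1.15 compares less than 2, so the script selects the newer lcov option set.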
00:30:34.761 * Looking for test storage... 00:30:34.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:34.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.761 --rc genhtml_branch_coverage=1 00:30:34.761 --rc genhtml_function_coverage=1 00:30:34.761 --rc genhtml_legend=1 00:30:34.761 --rc geninfo_all_blocks=1 00:30:34.761 --rc geninfo_unexecuted_blocks=1 00:30:34.761 00:30:34.761 ' 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:34.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.761 --rc genhtml_branch_coverage=1 00:30:34.761 --rc genhtml_function_coverage=1 
00:30:34.761 --rc genhtml_legend=1 00:30:34.761 --rc geninfo_all_blocks=1 00:30:34.761 --rc geninfo_unexecuted_blocks=1 00:30:34.761 00:30:34.761 ' 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:34.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.761 --rc genhtml_branch_coverage=1 00:30:34.761 --rc genhtml_function_coverage=1 00:30:34.761 --rc genhtml_legend=1 00:30:34.761 --rc geninfo_all_blocks=1 00:30:34.761 --rc geninfo_unexecuted_blocks=1 00:30:34.761 00:30:34.761 ' 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:34.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.761 --rc genhtml_branch_coverage=1 00:30:34.761 --rc genhtml_function_coverage=1 00:30:34.761 --rc genhtml_legend=1 00:30:34.761 --rc geninfo_all_blocks=1 00:30:34.761 --rc geninfo_unexecuted_blocks=1 00:30:34.761 00:30:34.761 ' 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.761 22:18:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.761 22:18:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:34.762 
22:18:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:34.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:34.762 00:30:34.762 real 0m0.232s 00:30:34.762 user 0m0.144s 00:30:34.762 sys 0m0.104s 00:30:34.762 22:18:53 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:34.762 ************************************ 00:30:34.762 END TEST dma 00:30:34.762 ************************************ 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.762 ************************************ 00:30:34.762 START TEST nvmf_identify 00:30:34.762 ************************************ 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:34.762 * Looking for test storage... 
00:30:34.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:30:34.762 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:35.023 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:35.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.024 --rc genhtml_branch_coverage=1 00:30:35.024 --rc genhtml_function_coverage=1 00:30:35.024 --rc genhtml_legend=1 00:30:35.024 --rc geninfo_all_blocks=1 00:30:35.024 --rc geninfo_unexecuted_blocks=1 00:30:35.024 00:30:35.024 ' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:30:35.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.024 --rc genhtml_branch_coverage=1 00:30:35.024 --rc genhtml_function_coverage=1 00:30:35.024 --rc genhtml_legend=1 00:30:35.024 --rc geninfo_all_blocks=1 00:30:35.024 --rc geninfo_unexecuted_blocks=1 00:30:35.024 00:30:35.024 ' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:35.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.024 --rc genhtml_branch_coverage=1 00:30:35.024 --rc genhtml_function_coverage=1 00:30:35.024 --rc genhtml_legend=1 00:30:35.024 --rc geninfo_all_blocks=1 00:30:35.024 --rc geninfo_unexecuted_blocks=1 00:30:35.024 00:30:35.024 ' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:35.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.024 --rc genhtml_branch_coverage=1 00:30:35.024 --rc genhtml_function_coverage=1 00:30:35.024 --rc genhtml_legend=1 00:30:35.024 --rc geninfo_all_blocks=1 00:30:35.024 --rc geninfo_unexecuted_blocks=1 00:30:35.024 00:30:35.024 ' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:35.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:35.024 22:18:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:43.167 22:19:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:43.167 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:43.167 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:43.168 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:43.168 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:43.168 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.168 22:19:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:43.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:30:43.168 00:30:43.168 --- 10.0.0.2 ping statistics --- 00:30:43.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.168 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:43.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:30:43.168 00:30:43.168 --- 10.0.0.1 ping statistics --- 00:30:43.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.168 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:43.168 22:19:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3667296 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3667296 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3667296 ']' 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:43.168 22:19:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.168 [2024-10-12 22:19:00.915077] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:43.168 [2024-10-12 22:19:00.915151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.168 [2024-10-12 22:19:01.006169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:43.168 [2024-10-12 22:19:01.055972] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.168 [2024-10-12 22:19:01.056027] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.168 [2024-10-12 22:19:01.056037] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.168 [2024-10-12 22:19:01.056044] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.168 [2024-10-12 22:19:01.056050] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:43.168 [2024-10-12 22:19:01.056150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.168 [2024-10-12 22:19:01.056231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.168 [2024-10-12 22:19:01.056388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.168 [2024-10-12 22:19:01.056389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.431 [2024-10-12 22:19:01.747204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.431 Malloc0 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.431 22:19:01 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.431 [2024-10-12 22:19:01.856986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.431 22:19:01 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.431 [ 00:30:43.431 { 00:30:43.431 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:43.431 "subtype": "Discovery", 00:30:43.431 "listen_addresses": [ 00:30:43.431 { 00:30:43.431 "trtype": "TCP", 00:30:43.431 "adrfam": "IPv4", 00:30:43.431 "traddr": "10.0.0.2", 00:30:43.431 "trsvcid": "4420" 00:30:43.431 } 00:30:43.431 ], 00:30:43.431 "allow_any_host": true, 00:30:43.431 "hosts": [] 00:30:43.431 }, 00:30:43.431 { 00:30:43.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.431 "subtype": "NVMe", 00:30:43.431 "listen_addresses": [ 00:30:43.431 { 00:30:43.431 "trtype": "TCP", 00:30:43.431 "adrfam": "IPv4", 00:30:43.431 "traddr": "10.0.0.2", 00:30:43.431 "trsvcid": "4420" 00:30:43.431 } 00:30:43.431 ], 00:30:43.431 "allow_any_host": true, 00:30:43.431 "hosts": [], 00:30:43.431 "serial_number": "SPDK00000000000001", 00:30:43.431 "model_number": "SPDK bdev Controller", 00:30:43.431 "max_namespaces": 32, 00:30:43.431 "min_cntlid": 1, 00:30:43.431 "max_cntlid": 65519, 00:30:43.431 "namespaces": [ 00:30:43.431 { 00:30:43.431 "nsid": 1, 00:30:43.431 "bdev_name": "Malloc0", 00:30:43.431 "name": "Malloc0", 00:30:43.431 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:43.431 "eui64": "ABCDEF0123456789", 00:30:43.431 "uuid": "74bcae42-597d-461c-988d-b377b105a2fe" 00:30:43.431 } 00:30:43.431 ] 00:30:43.431 } 00:30:43.431 ] 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.431 22:19:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:43.695 [2024-10-12 22:19:01.919900] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:43.695 [2024-10-12 22:19:01.919950] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667644 ] 00:30:43.695 [2024-10-12 22:19:01.958252] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:43.695 [2024-10-12 22:19:01.958322] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:43.695 [2024-10-12 22:19:01.958328] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:43.695 [2024-10-12 22:19:01.958344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:43.695 [2024-10-12 22:19:01.958357] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:43.695 [2024-10-12 22:19:01.959351] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:43.695 [2024-10-12 22:19:01.959399] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x24250d0 0 00:30:43.695 [2024-10-12 22:19:01.973118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:43.695 [2024-10-12 22:19:01.973136] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:43.695 [2024-10-12 22:19:01.973142] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:43.695 [2024-10-12 22:19:01.973146] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:43.695 [2024-10-12 22:19:01.973188] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.695 [2024-10-12 22:19:01.973194] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.695 [2024-10-12 22:19:01.973199] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24250d0) 00:30:43.695 [2024-10-12 22:19:01.973223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:43.695 [2024-10-12 22:19:01.973248] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f540, cid 0, qid 0 00:30:43.695 [2024-10-12 22:19:01.981118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.695 [2024-10-12 22:19:01.981127] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.695 [2024-10-12 22:19:01.981131] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.695 [2024-10-12 22:19:01.981136] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f540) on tqpair=0x24250d0 00:30:43.696 [2024-10-12 22:19:01.981148] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:43.696 [2024-10-12 22:19:01.981157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:43.696 [2024-10-12 22:19:01.981162] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:43.696 [2024-10-12 22:19:01.981181] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981189] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24250d0) 
00:30:43.696 [2024-10-12 22:19:01.981198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.696 [2024-10-12 22:19:01.981214] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f540, cid 0, qid 0 00:30:43.696 [2024-10-12 22:19:01.981449] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.696 [2024-10-12 22:19:01.981455] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.696 [2024-10-12 22:19:01.981459] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981463] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f540) on tqpair=0x24250d0 00:30:43.696 [2024-10-12 22:19:01.981469] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:43.696 [2024-10-12 22:19:01.981476] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:43.696 [2024-10-12 22:19:01.981483] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981487] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981490] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24250d0) 00:30:43.696 [2024-10-12 22:19:01.981497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.696 [2024-10-12 22:19:01.981508] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f540, cid 0, qid 0 00:30:43.696 [2024-10-12 22:19:01.981678] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.696 [2024-10-12 22:19:01.981684] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:43.696 [2024-10-12 22:19:01.981688] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981691] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f540) on tqpair=0x24250d0 00:30:43.696 [2024-10-12 22:19:01.981697] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:43.696 [2024-10-12 22:19:01.981706] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:43.696 [2024-10-12 22:19:01.981712] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981716] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981720] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24250d0) 00:30:43.696 [2024-10-12 22:19:01.981730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.696 [2024-10-12 22:19:01.981740] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f540, cid 0, qid 0 00:30:43.696 [2024-10-12 22:19:01.981918] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.696 [2024-10-12 22:19:01.981924] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.696 [2024-10-12 22:19:01.981927] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981931] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f540) on tqpair=0x24250d0 00:30:43.696 [2024-10-12 22:19:01.981937] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:43.696 [2024-10-12 22:19:01.981946] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981950] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.981953] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24250d0) 00:30:43.696 [2024-10-12 22:19:01.981960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.696 [2024-10-12 22:19:01.981970] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f540, cid 0, qid 0 00:30:43.696 [2024-10-12 22:19:01.982172] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.696 [2024-10-12 22:19:01.982179] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.696 [2024-10-12 22:19:01.982182] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.982186] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f540) on tqpair=0x24250d0 00:30:43.696 [2024-10-12 22:19:01.982191] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:43.696 [2024-10-12 22:19:01.982196] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:43.696 [2024-10-12 22:19:01.982204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:43.696 [2024-10-12 22:19:01.982309] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:43.696 [2024-10-12 22:19:01.982314] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:30:43.696 [2024-10-12 22:19:01.982325] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.982329] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.982333] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24250d0) 00:30:43.696 [2024-10-12 22:19:01.982339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.696 [2024-10-12 22:19:01.982350] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f540, cid 0, qid 0 00:30:43.696 [2024-10-12 22:19:01.982553] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.696 [2024-10-12 22:19:01.982559] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.696 [2024-10-12 22:19:01.982562] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.982566] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f540) on tqpair=0x24250d0 00:30:43.696 [2024-10-12 22:19:01.982571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:43.696 [2024-10-12 22:19:01.982580] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.982584] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.982590] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24250d0) 00:30:43.696 [2024-10-12 22:19:01.982597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.696 [2024-10-12 22:19:01.982607] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f540, cid 0, qid 0 00:30:43.696 [2024-10-12 
22:19:01.982797] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.696 [2024-10-12 22:19:01.982803] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.696 [2024-10-12 22:19:01.982806] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.982810] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f540) on tqpair=0x24250d0 00:30:43.696 [2024-10-12 22:19:01.982815] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:43.696 [2024-10-12 22:19:01.982820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:43.696 [2024-10-12 22:19:01.982828] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:43.696 [2024-10-12 22:19:01.982836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:43.696 [2024-10-12 22:19:01.982846] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.982850] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24250d0) 00:30:43.696 [2024-10-12 22:19:01.982857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.696 [2024-10-12 22:19:01.982868] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f540, cid 0, qid 0 00:30:43.696 [2024-10-12 22:19:01.983122] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.696 [2024-10-12 22:19:01.983129] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:30:43.696 [2024-10-12 22:19:01.983133] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.983137] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24250d0): datao=0, datal=4096, cccid=0 00:30:43.696 [2024-10-12 22:19:01.983142] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248f540) on tqpair(0x24250d0): expected_datao=0, payload_size=4096 00:30:43.696 [2024-10-12 22:19:01.983147] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.983171] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:01.983176] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:02.027114] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.696 [2024-10-12 22:19:02.027125] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.696 [2024-10-12 22:19:02.027129] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:02.027133] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f540) on tqpair=0x24250d0 00:30:43.696 [2024-10-12 22:19:02.027143] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:43.696 [2024-10-12 22:19:02.027149] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:43.696 [2024-10-12 22:19:02.027153] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:43.696 [2024-10-12 22:19:02.027158] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:43.696 [2024-10-12 22:19:02.027163] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:30:43.696 [2024-10-12 22:19:02.027172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:43.696 [2024-10-12 22:19:02.027182] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:43.696 [2024-10-12 22:19:02.027189] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:02.027194] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:02.027197] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24250d0) 00:30:43.696 [2024-10-12 22:19:02.027206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:43.696 [2024-10-12 22:19:02.027220] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f540, cid 0, qid 0 00:30:43.696 [2024-10-12 22:19:02.027429] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.696 [2024-10-12 22:19:02.027435] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.696 [2024-10-12 22:19:02.027438] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.696 [2024-10-12 22:19:02.027442] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f540) on tqpair=0x24250d0 00:30:43.697 [2024-10-12 22:19:02.027452] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027455] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027459] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x24250d0) 00:30:43.697 [2024-10-12 22:19:02.027465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.697 [2024-10-12 22:19:02.027472] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027475] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027479] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x24250d0) 00:30:43.697 [2024-10-12 22:19:02.027485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.697 [2024-10-12 22:19:02.027491] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027495] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027498] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x24250d0) 00:30:43.697 [2024-10-12 22:19:02.027504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.697 [2024-10-12 22:19:02.027510] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027514] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027517] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24250d0) 00:30:43.697 [2024-10-12 22:19:02.027523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.697 [2024-10-12 22:19:02.027528] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:43.697 [2024-10-12 22:19:02.027542] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:30:43.697 [2024-10-12 22:19:02.027549] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027552] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24250d0) 00:30:43.697 [2024-10-12 22:19:02.027559] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.697 [2024-10-12 22:19:02.027571] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f540, cid 0, qid 0 00:30:43.697 [2024-10-12 22:19:02.027583] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f6c0, cid 1, qid 0 00:30:43.697 [2024-10-12 22:19:02.027588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f840, cid 2, qid 0 00:30:43.697 [2024-10-12 22:19:02.027592] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f9c0, cid 3, qid 0 00:30:43.697 [2024-10-12 22:19:02.027597] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248fb40, cid 4, qid 0 00:30:43.697 [2024-10-12 22:19:02.027843] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.697 [2024-10-12 22:19:02.027849] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.697 [2024-10-12 22:19:02.027853] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027857] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248fb40) on tqpair=0x24250d0 00:30:43.697 [2024-10-12 22:19:02.027862] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:43.697 [2024-10-12 22:19:02.027868] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:43.697 [2024-10-12 22:19:02.027879] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.027883] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24250d0) 00:30:43.697 [2024-10-12 22:19:02.027890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.697 [2024-10-12 22:19:02.027900] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248fb40, cid 4, qid 0 00:30:43.697 [2024-10-12 22:19:02.028122] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.697 [2024-10-12 22:19:02.028130] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.697 [2024-10-12 22:19:02.028134] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028137] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24250d0): datao=0, datal=4096, cccid=4 00:30:43.697 [2024-10-12 22:19:02.028142] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248fb40) on tqpair(0x24250d0): expected_datao=0, payload_size=4096 00:30:43.697 [2024-10-12 22:19:02.028146] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028154] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028157] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028319] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.697 [2024-10-12 22:19:02.028325] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.697 [2024-10-12 22:19:02.028329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028332] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248fb40) on tqpair=0x24250d0 00:30:43.697 [2024-10-12 22:19:02.028347] 
nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:43.697 [2024-10-12 22:19:02.028379] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028383] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24250d0) 00:30:43.697 [2024-10-12 22:19:02.028390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.697 [2024-10-12 22:19:02.028397] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028401] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028405] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x24250d0) 00:30:43.697 [2024-10-12 22:19:02.028411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.697 [2024-10-12 22:19:02.028427] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248fb40, cid 4, qid 0 00:30:43.697 [2024-10-12 22:19:02.028432] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248fcc0, cid 5, qid 0 00:30:43.697 [2024-10-12 22:19:02.028674] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.697 [2024-10-12 22:19:02.028680] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.697 [2024-10-12 22:19:02.028684] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028687] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24250d0): datao=0, datal=1024, cccid=4 00:30:43.697 [2024-10-12 22:19:02.028692] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248fb40) on tqpair(0x24250d0): expected_datao=0, 
payload_size=1024 00:30:43.697 [2024-10-12 22:19:02.028696] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028703] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028707] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028712] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.697 [2024-10-12 22:19:02.028718] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.697 [2024-10-12 22:19:02.028721] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.028725] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248fcc0) on tqpair=0x24250d0 00:30:43.697 [2024-10-12 22:19:02.070333] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.697 [2024-10-12 22:19:02.070347] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.697 [2024-10-12 22:19:02.070350] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.070355] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248fb40) on tqpair=0x24250d0 00:30:43.697 [2024-10-12 22:19:02.070369] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.070373] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24250d0) 00:30:43.697 [2024-10-12 22:19:02.070381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.697 [2024-10-12 22:19:02.070398] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248fb40, cid 4, qid 0 00:30:43.697 [2024-10-12 22:19:02.070655] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.697 [2024-10-12 22:19:02.070662] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.697 [2024-10-12 22:19:02.070665] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.070669] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24250d0): datao=0, datal=3072, cccid=4 00:30:43.697 [2024-10-12 22:19:02.070674] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248fb40) on tqpair(0x24250d0): expected_datao=0, payload_size=3072 00:30:43.697 [2024-10-12 22:19:02.070678] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.070695] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.070699] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.114116] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.697 [2024-10-12 22:19:02.114128] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.697 [2024-10-12 22:19:02.114132] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.114136] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248fb40) on tqpair=0x24250d0 00:30:43.697 [2024-10-12 22:19:02.114147] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.114151] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x24250d0) 00:30:43.697 [2024-10-12 22:19:02.114159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.697 [2024-10-12 22:19:02.114182] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248fb40, cid 4, qid 0 00:30:43.697 [2024-10-12 22:19:02.114325] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.697 [2024-10-12 
22:19:02.114332] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.697 [2024-10-12 22:19:02.114335] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.697 [2024-10-12 22:19:02.114339] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x24250d0): datao=0, datal=8, cccid=4 00:30:43.697 [2024-10-12 22:19:02.114344] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248fb40) on tqpair(0x24250d0): expected_datao=0, payload_size=8 00:30:43.698 [2024-10-12 22:19:02.114348] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.698 [2024-10-12 22:19:02.114355] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.698 [2024-10-12 22:19:02.114359] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.698 [2024-10-12 22:19:02.156272] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.698 [2024-10-12 22:19:02.156282] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.698 [2024-10-12 22:19:02.156286] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.698 [2024-10-12 22:19:02.156290] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248fb40) on tqpair=0x24250d0 00:30:43.698 ===================================================== 00:30:43.698 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:43.698 ===================================================== 00:30:43.698 Controller Capabilities/Features 00:30:43.698 ================================ 00:30:43.698 Vendor ID: 0000 00:30:43.698 Subsystem Vendor ID: 0000 00:30:43.698 Serial Number: .................... 00:30:43.698 Model Number: ........................................ 
00:30:43.698 Firmware Version: 24.09.1 00:30:43.698 Recommended Arb Burst: 0 00:30:43.698 IEEE OUI Identifier: 00 00 00 00:30:43.698 Multi-path I/O 00:30:43.698 May have multiple subsystem ports: No 00:30:43.698 May have multiple controllers: No 00:30:43.698 Associated with SR-IOV VF: No 00:30:43.698 Max Data Transfer Size: 131072 00:30:43.698 Max Number of Namespaces: 0 00:30:43.698 Max Number of I/O Queues: 1024 00:30:43.698 NVMe Specification Version (VS): 1.3 00:30:43.698 NVMe Specification Version (Identify): 1.3 00:30:43.698 Maximum Queue Entries: 128 00:30:43.698 Contiguous Queues Required: Yes 00:30:43.698 Arbitration Mechanisms Supported 00:30:43.698 Weighted Round Robin: Not Supported 00:30:43.698 Vendor Specific: Not Supported 00:30:43.698 Reset Timeout: 15000 ms 00:30:43.698 Doorbell Stride: 4 bytes 00:30:43.698 NVM Subsystem Reset: Not Supported 00:30:43.698 Command Sets Supported 00:30:43.698 NVM Command Set: Supported 00:30:43.698 Boot Partition: Not Supported 00:30:43.698 Memory Page Size Minimum: 4096 bytes 00:30:43.698 Memory Page Size Maximum: 4096 bytes 00:30:43.698 Persistent Memory Region: Not Supported 00:30:43.698 Optional Asynchronous Events Supported 00:30:43.698 Namespace Attribute Notices: Not Supported 00:30:43.698 Firmware Activation Notices: Not Supported 00:30:43.698 ANA Change Notices: Not Supported 00:30:43.698 PLE Aggregate Log Change Notices: Not Supported 00:30:43.698 LBA Status Info Alert Notices: Not Supported 00:30:43.698 EGE Aggregate Log Change Notices: Not Supported 00:30:43.698 Normal NVM Subsystem Shutdown event: Not Supported 00:30:43.698 Zone Descriptor Change Notices: Not Supported 00:30:43.698 Discovery Log Change Notices: Supported 00:30:43.698 Controller Attributes 00:30:43.698 128-bit Host Identifier: Not Supported 00:30:43.698 Non-Operational Permissive Mode: Not Supported 00:30:43.698 NVM Sets: Not Supported 00:30:43.698 Read Recovery Levels: Not Supported 00:30:43.698 Endurance Groups: Not Supported 
00:30:43.698 Predictable Latency Mode: Not Supported 00:30:43.698 Traffic Based Keep ALive: Not Supported 00:30:43.698 Namespace Granularity: Not Supported 00:30:43.698 SQ Associations: Not Supported 00:30:43.698 UUID List: Not Supported 00:30:43.698 Multi-Domain Subsystem: Not Supported 00:30:43.698 Fixed Capacity Management: Not Supported 00:30:43.698 Variable Capacity Management: Not Supported 00:30:43.698 Delete Endurance Group: Not Supported 00:30:43.698 Delete NVM Set: Not Supported 00:30:43.698 Extended LBA Formats Supported: Not Supported 00:30:43.698 Flexible Data Placement Supported: Not Supported 00:30:43.698 00:30:43.698 Controller Memory Buffer Support 00:30:43.698 ================================ 00:30:43.698 Supported: No 00:30:43.698 00:30:43.698 Persistent Memory Region Support 00:30:43.698 ================================ 00:30:43.698 Supported: No 00:30:43.698 00:30:43.698 Admin Command Set Attributes 00:30:43.698 ============================ 00:30:43.698 Security Send/Receive: Not Supported 00:30:43.698 Format NVM: Not Supported 00:30:43.698 Firmware Activate/Download: Not Supported 00:30:43.698 Namespace Management: Not Supported 00:30:43.698 Device Self-Test: Not Supported 00:30:43.698 Directives: Not Supported 00:30:43.698 NVMe-MI: Not Supported 00:30:43.698 Virtualization Management: Not Supported 00:30:43.698 Doorbell Buffer Config: Not Supported 00:30:43.698 Get LBA Status Capability: Not Supported 00:30:43.698 Command & Feature Lockdown Capability: Not Supported 00:30:43.698 Abort Command Limit: 1 00:30:43.698 Async Event Request Limit: 4 00:30:43.698 Number of Firmware Slots: N/A 00:30:43.698 Firmware Slot 1 Read-Only: N/A 00:30:43.698 Firmware Activation Without Reset: N/A 00:30:43.698 Multiple Update Detection Support: N/A 00:30:43.698 Firmware Update Granularity: No Information Provided 00:30:43.698 Per-Namespace SMART Log: No 00:30:43.698 Asymmetric Namespace Access Log Page: Not Supported 00:30:43.698 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:30:43.698 Command Effects Log Page: Not Supported 00:30:43.698 Get Log Page Extended Data: Supported 00:30:43.698 Telemetry Log Pages: Not Supported 00:30:43.698 Persistent Event Log Pages: Not Supported 00:30:43.698 Supported Log Pages Log Page: May Support 00:30:43.698 Commands Supported & Effects Log Page: Not Supported 00:30:43.698 Feature Identifiers & Effects Log Page:May Support 00:30:43.698 NVMe-MI Commands & Effects Log Page: May Support 00:30:43.698 Data Area 4 for Telemetry Log: Not Supported 00:30:43.698 Error Log Page Entries Supported: 128 00:30:43.698 Keep Alive: Not Supported 00:30:43.698 00:30:43.698 NVM Command Set Attributes 00:30:43.698 ========================== 00:30:43.698 Submission Queue Entry Size 00:30:43.698 Max: 1 00:30:43.698 Min: 1 00:30:43.698 Completion Queue Entry Size 00:30:43.698 Max: 1 00:30:43.698 Min: 1 00:30:43.698 Number of Namespaces: 0 00:30:43.698 Compare Command: Not Supported 00:30:43.698 Write Uncorrectable Command: Not Supported 00:30:43.698 Dataset Management Command: Not Supported 00:30:43.698 Write Zeroes Command: Not Supported 00:30:43.698 Set Features Save Field: Not Supported 00:30:43.698 Reservations: Not Supported 00:30:43.698 Timestamp: Not Supported 00:30:43.698 Copy: Not Supported 00:30:43.698 Volatile Write Cache: Not Present 00:30:43.698 Atomic Write Unit (Normal): 1 00:30:43.698 Atomic Write Unit (PFail): 1 00:30:43.698 Atomic Compare & Write Unit: 1 00:30:43.698 Fused Compare & Write: Supported 00:30:43.698 Scatter-Gather List 00:30:43.698 SGL Command Set: Supported 00:30:43.698 SGL Keyed: Supported 00:30:43.698 SGL Bit Bucket Descriptor: Not Supported 00:30:43.698 SGL Metadata Pointer: Not Supported 00:30:43.698 Oversized SGL: Not Supported 00:30:43.698 SGL Metadata Address: Not Supported 00:30:43.698 SGL Offset: Supported 00:30:43.698 Transport SGL Data Block: Not Supported 00:30:43.698 Replay Protected Memory Block: Not Supported 00:30:43.698 00:30:43.698 
Firmware Slot Information 00:30:43.698 ========================= 00:30:43.698 Active slot: 0 00:30:43.698 00:30:43.698 00:30:43.698 Error Log 00:30:43.698 ========= 00:30:43.698 00:30:43.698 Active Namespaces 00:30:43.698 ================= 00:30:43.698 Discovery Log Page 00:30:43.698 ================== 00:30:43.698 Generation Counter: 2 00:30:43.698 Number of Records: 2 00:30:43.698 Record Format: 0 00:30:43.698 00:30:43.698 Discovery Log Entry 0 00:30:43.698 ---------------------- 00:30:43.698 Transport Type: 3 (TCP) 00:30:43.698 Address Family: 1 (IPv4) 00:30:43.698 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:43.698 Entry Flags: 00:30:43.698 Duplicate Returned Information: 1 00:30:43.698 Explicit Persistent Connection Support for Discovery: 1 00:30:43.698 Transport Requirements: 00:30:43.698 Secure Channel: Not Required 00:30:43.698 Port ID: 0 (0x0000) 00:30:43.698 Controller ID: 65535 (0xffff) 00:30:43.698 Admin Max SQ Size: 128 00:30:43.698 Transport Service Identifier: 4420 00:30:43.698 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:43.698 Transport Address: 10.0.0.2 00:30:43.698 Discovery Log Entry 1 00:30:43.698 ---------------------- 00:30:43.698 Transport Type: 3 (TCP) 00:30:43.698 Address Family: 1 (IPv4) 00:30:43.698 Subsystem Type: 2 (NVM Subsystem) 00:30:43.698 Entry Flags: 00:30:43.698 Duplicate Returned Information: 0 00:30:43.698 Explicit Persistent Connection Support for Discovery: 0 00:30:43.698 Transport Requirements: 00:30:43.698 Secure Channel: Not Required 00:30:43.698 Port ID: 0 (0x0000) 00:30:43.698 Controller ID: 65535 (0xffff) 00:30:43.698 Admin Max SQ Size: 128 00:30:43.698 Transport Service Identifier: 4420 00:30:43.698 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:43.698 Transport Address: 10.0.0.2 [2024-10-12 22:19:02.156404] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:43.698 [2024-10-12 22:19:02.156417] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f540) on tqpair=0x24250d0 00:30:43.699 [2024-10-12 22:19:02.156426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.699 [2024-10-12 22:19:02.156431] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f6c0) on tqpair=0x24250d0 00:30:43.699 [2024-10-12 22:19:02.156436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.699 [2024-10-12 22:19:02.156441] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f840) on tqpair=0x24250d0 00:30:43.699 [2024-10-12 22:19:02.156446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.699 [2024-10-12 22:19:02.156451] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f9c0) on tqpair=0x24250d0 00:30:43.699 [2024-10-12 22:19:02.156456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.699 [2024-10-12 22:19:02.156466] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.156470] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.156474] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24250d0) 00:30:43.699 [2024-10-12 22:19:02.156482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.699 [2024-10-12 22:19:02.156498] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f9c0, cid 3, qid 0 00:30:43.699 [2024-10-12 22:19:02.156710] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.699 [2024-10-12 22:19:02.156716] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.699 [2024-10-12 22:19:02.156720] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.156724] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f9c0) on tqpair=0x24250d0 00:30:43.699 [2024-10-12 22:19:02.156731] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.156735] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.156738] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24250d0) 00:30:43.699 [2024-10-12 22:19:02.156745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.699 [2024-10-12 22:19:02.156761] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f9c0, cid 3, qid 0 00:30:43.699 [2024-10-12 22:19:02.156968] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.699 [2024-10-12 22:19:02.156974] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.699 [2024-10-12 22:19:02.156978] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.156982] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f9c0) on tqpair=0x24250d0 00:30:43.699 [2024-10-12 22:19:02.156987] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:43.699 [2024-10-12 22:19:02.156996] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:43.699 [2024-10-12 22:19:02.157005] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.157009] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.699 [2024-10-12 
22:19:02.157013] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24250d0) 00:30:43.699 [2024-10-12 22:19:02.157019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.699 [2024-10-12 22:19:02.157030] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f9c0, cid 3, qid 0 00:30:43.699 [2024-10-12 22:19:02.157206] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.699 [2024-10-12 22:19:02.157213] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.699 [2024-10-12 22:19:02.157216] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.157220] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f9c0) on tqpair=0x24250d0 00:30:43.699 [2024-10-12 22:19:02.157231] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.157235] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.157238] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24250d0) 00:30:43.699 [2024-10-12 22:19:02.157245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.699 [2024-10-12 22:19:02.157256] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f9c0, cid 3, qid 0 00:30:43.699 [2024-10-12 22:19:02.157437] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.699 [2024-10-12 22:19:02.157443] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.699 [2024-10-12 22:19:02.157447] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.157451] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f9c0) on tqpair=0x24250d0 
00:30:43.699 [2024-10-12 22:19:02.157460] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.157464] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.157468] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24250d0) 00:30:43.699 [2024-10-12 22:19:02.157474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.699 [2024-10-12 22:19:02.157485] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f9c0, cid 3, qid 0 00:30:43.699 [2024-10-12 22:19:02.157656] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.699 [2024-10-12 22:19:02.157662] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.699 [2024-10-12 22:19:02.157665] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.157669] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f9c0) on tqpair=0x24250d0 00:30:43.699 [2024-10-12 22:19:02.157680] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.157684] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.699 [2024-10-12 22:19:02.157687] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24250d0) 00:30:43.699 [2024-10-12 22:19:02.157699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.699 [2024-10-12 22:19:02.157709] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f9c0, cid 3, qid 0 00:30:43.699 [2024-10-12 22:19:02.157900] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.699 [2024-10-12 22:19:02.157906] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.699 
[2024-10-12 22:19:02.157909] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.699 [2024-10-12 22:19:02.157913] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f9c0) on tqpair=0x24250d0
00:30:43.699 [2024-10-12 22:19:02.157923] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.699 [2024-10-12 22:19:02.157927] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.699 [2024-10-12 22:19:02.157930] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24250d0)
00:30:43.699 [2024-10-12 22:19:02.157937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.699 [2024-10-12 22:19:02.157947] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f9c0, cid 3, qid 0
00:30:43.699 [2024-10-12 22:19:02.162110] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.699 [2024-10-12 22:19:02.162118] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.699 [2024-10-12 22:19:02.162122] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.699 [2024-10-12 22:19:02.162126] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f9c0) on tqpair=0x24250d0
00:30:43.699 [2024-10-12 22:19:02.162136] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.699 [2024-10-12 22:19:02.162140] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.699 [2024-10-12 22:19:02.162143] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x24250d0)
00:30:43.699 [2024-10-12 22:19:02.162150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.699 [2024-10-12 22:19:02.162163] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248f9c0, cid 3, qid 0
00:30:43.699 [2024-10-12 22:19:02.162372] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.699 [2024-10-12 22:19:02.162378] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.699 [2024-10-12 22:19:02.162381] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.699 [2024-10-12 22:19:02.162385] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248f9c0) on tqpair=0x24250d0
00:30:43.699 [2024-10-12 22:19:02.162393] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds
00:30:43.699
00:30:43.965 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:30:43.965 [2024-10-12 22:19:02.207027] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:30:43.965 [2024-10-12 22:19:02.207076] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667647 ]
00:30:43.965 [2024-10-12 22:19:02.242254] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:30:43.965 [2024-10-12 22:19:02.242320] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:30:43.965 [2024-10-12 22:19:02.242332] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:30:43.965 [2024-10-12 22:19:02.242348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:30:43.965 [2024-10-12 22:19:02.242359] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:30:43.965 [2024-10-12 22:19:02.246424] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:30:43.965 [2024-10-12 22:19:02.246471] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21950d0 0
00:30:43.965 [2024-10-12 22:19:02.254129] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:30:43.965 [2024-10-12 22:19:02.254144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:30:43.965 [2024-10-12 22:19:02.254149] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:30:43.965 [2024-10-12 22:19:02.254153] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:30:43.965 [2024-10-12 22:19:02.254188] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.254195] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.254199] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21950d0)
00:30:43.965 [2024-10-12 22:19:02.254213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:30:43.965 [2024-10-12 22:19:02.254236] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff540, cid 0, qid 0
00:30:43.965 [2024-10-12 22:19:02.262115] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.965 [2024-10-12 22:19:02.262125] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.965 [2024-10-12 22:19:02.262129] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262133] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff540) on tqpair=0x21950d0
00:30:43.965 [2024-10-12 22:19:02.262146] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:30:43.965 [2024-10-12 22:19:02.262154] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:30:43.965 [2024-10-12 22:19:02.262159] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:30:43.965 [2024-10-12 22:19:02.262174] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262179] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262182] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21950d0)
00:30:43.965 [2024-10-12 22:19:02.262192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.965 [2024-10-12 22:19:02.262208] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff540, cid 0, qid 0
00:30:43.965 [2024-10-12 22:19:02.262398] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.965 [2024-10-12 22:19:02.262405] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.965 [2024-10-12 22:19:02.262408] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262412] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff540) on tqpair=0x21950d0
00:30:43.965 [2024-10-12 22:19:02.262417] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:30:43.965 [2024-10-12 22:19:02.262425] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:30:43.965 [2024-10-12 22:19:02.262432] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262436] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262439] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21950d0)
00:30:43.965 [2024-10-12 22:19:02.262446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.965 [2024-10-12 22:19:02.262463] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff540, cid 0, qid 0
00:30:43.965 [2024-10-12 22:19:02.262619] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.965 [2024-10-12 22:19:02.262625] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.965 [2024-10-12 22:19:02.262629] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262633] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff540) on tqpair=0x21950d0
00:30:43.965 [2024-10-12 22:19:02.262638] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:30:43.965 [2024-10-12 22:19:02.262646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:30:43.965 [2024-10-12 22:19:02.262652] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262656] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262660] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21950d0)
00:30:43.965 [2024-10-12 22:19:02.262667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.965 [2024-10-12 22:19:02.262677] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff540, cid 0, qid 0
00:30:43.965 [2024-10-12 22:19:02.262843] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.965 [2024-10-12 22:19:02.262849] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.965 [2024-10-12 22:19:02.262853] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262857] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff540) on tqpair=0x21950d0
00:30:43.965 [2024-10-12 22:19:02.262862] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:30:43.965 [2024-10-12 22:19:02.262872] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262876] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.262880] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21950d0)
00:30:43.965 [2024-10-12 22:19:02.262887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.965 [2024-10-12 22:19:02.262897] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff540, cid 0, qid 0
00:30:43.965 [2024-10-12 22:19:02.263092] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.965 [2024-10-12 22:19:02.263098] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.965 [2024-10-12 22:19:02.263106] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.263110] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff540) on tqpair=0x21950d0
00:30:43.965 [2024-10-12 22:19:02.263115] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:30:43.965 [2024-10-12 22:19:02.263120] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:30:43.965 [2024-10-12 22:19:02.263128] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:30:43.965 [2024-10-12 22:19:02.263234] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:30:43.965 [2024-10-12 22:19:02.263238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:30:43.965 [2024-10-12 22:19:02.263246] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.263250] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.263257] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21950d0)
00:30:43.965 [2024-10-12 22:19:02.263264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.965 [2024-10-12 22:19:02.263275] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff540, cid 0, qid 0
00:30:43.965 [2024-10-12 22:19:02.263373] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.965 [2024-10-12 22:19:02.263379] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.965 [2024-10-12 22:19:02.263383] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.263387] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff540) on tqpair=0x21950d0
00:30:43.965 [2024-10-12 22:19:02.263391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:30:43.965 [2024-10-12 22:19:02.263401] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.263405] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.263409] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21950d0)
00:30:43.965 [2024-10-12 22:19:02.263415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.965 [2024-10-12 22:19:02.263425] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff540, cid 0, qid 0
00:30:43.965 [2024-10-12 22:19:02.263631] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.965 [2024-10-12 22:19:02.263637] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.965 [2024-10-12 22:19:02.263641] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.965 [2024-10-12 22:19:02.263644] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff540) on tqpair=0x21950d0
00:30:43.965 [2024-10-12 22:19:02.263649] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:30:43.965 [2024-10-12 22:19:02.263654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:30:43.965 [2024-10-12 22:19:02.263663] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:30:43.965 [2024-10-12 22:19:02.263671] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.263680] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.263684] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21950d0)
00:30:43.966 [2024-10-12 22:19:02.263691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.966 [2024-10-12 22:19:02.263701] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff540, cid 0, qid 0
00:30:43.966 [2024-10-12 22:19:02.263871] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:43.966 [2024-10-12 22:19:02.263878] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:43.966 [2024-10-12 22:19:02.263881] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.263885] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21950d0): datao=0, datal=4096, cccid=0
00:30:43.966 [2024-10-12 22:19:02.263890] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ff540) on tqpair(0x21950d0): expected_datao=0, payload_size=4096
00:30:43.966 [2024-10-12 22:19:02.263895] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.263935] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.263939] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264059] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.966 [2024-10-12 22:19:02.264067] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.966 [2024-10-12 22:19:02.264071] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264075] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff540) on tqpair=0x21950d0
00:30:43.966 [2024-10-12 22:19:02.264083] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:30:43.966 [2024-10-12 22:19:02.264088] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:30:43.966 [2024-10-12 22:19:02.264092] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:30:43.966 [2024-10-12 22:19:02.264096] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:30:43.966 [2024-10-12 22:19:02.264101] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:30:43.966 [2024-10-12 22:19:02.264112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.264120] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.264127] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264131] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264135] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21950d0)
00:30:43.966 [2024-10-12 22:19:02.264142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:30:43.966 [2024-10-12 22:19:02.264153] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff540, cid 0, qid 0
00:30:43.966 [2024-10-12 22:19:02.264375] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.966 [2024-10-12 22:19:02.264381] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.966 [2024-10-12 22:19:02.264385] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264389] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff540) on tqpair=0x21950d0
00:30:43.966 [2024-10-12 22:19:02.264396] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264400] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264403] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21950d0)
00:30:43.966 [2024-10-12 22:19:02.264409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:43.966 [2024-10-12 22:19:02.264416] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264419] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264423] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21950d0)
00:30:43.966 [2024-10-12 22:19:02.264429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:43.966 [2024-10-12 22:19:02.264435] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264439] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264442] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21950d0)
00:30:43.966 [2024-10-12 22:19:02.264448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:43.966 [2024-10-12 22:19:02.264454] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264458] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264461] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0)
00:30:43.966 [2024-10-12 22:19:02.264469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:43.966 [2024-10-12 22:19:02.264475] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.264485] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.264492] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264496] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21950d0)
00:30:43.966 [2024-10-12 22:19:02.264503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.966 [2024-10-12 22:19:02.264516] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff540, cid 0, qid 0
00:30:43.966 [2024-10-12 22:19:02.264522] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff6c0, cid 1, qid 0
00:30:43.966 [2024-10-12 22:19:02.264526] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff840, cid 2, qid 0
00:30:43.966 [2024-10-12 22:19:02.264531] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0
00:30:43.966 [2024-10-12 22:19:02.264536] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffb40, cid 4, qid 0
00:30:43.966 [2024-10-12 22:19:02.264692] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.966 [2024-10-12 22:19:02.264698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.966 [2024-10-12 22:19:02.264702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffb40) on tqpair=0x21950d0
00:30:43.966 [2024-10-12 22:19:02.264711] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:30:43.966 [2024-10-12 22:19:02.264716] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.264725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.264735] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.264741] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264745] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21950d0)
00:30:43.966 [2024-10-12 22:19:02.264756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:30:43.966 [2024-10-12 22:19:02.264767] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffb40, cid 4, qid 0
00:30:43.966 [2024-10-12 22:19:02.264893] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.966 [2024-10-12 22:19:02.264899] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.966 [2024-10-12 22:19:02.264902] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264906] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffb40) on tqpair=0x21950d0
00:30:43.966 [2024-10-12 22:19:02.264975] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.264984] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.264991] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.264995] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21950d0)
00:30:43.966 [2024-10-12 22:19:02.265003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.966 [2024-10-12 22:19:02.265014] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffb40, cid 4, qid 0
00:30:43.966 [2024-10-12 22:19:02.265142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:43.966 [2024-10-12 22:19:02.265150] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:43.966 [2024-10-12 22:19:02.265153] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.265157] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21950d0): datao=0, datal=4096, cccid=4
00:30:43.966 [2024-10-12 22:19:02.265161] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ffb40) on tqpair(0x21950d0): expected_datao=0, payload_size=4096
00:30:43.966 [2024-10-12 22:19:02.265166] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.265179] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.265183] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.310112] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.966 [2024-10-12 22:19:02.310120] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.966 [2024-10-12 22:19:02.310124] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.310128] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffb40) on tqpair=0x21950d0
00:30:43.966 [2024-10-12 22:19:02.310140] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:30:43.966 [2024-10-12 22:19:02.310152] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.310162] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:30:43.966 [2024-10-12 22:19:02.310169] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.310173] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21950d0)
00:30:43.966 [2024-10-12 22:19:02.310180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.966 [2024-10-12 22:19:02.310194] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffb40, cid 4, qid 0
00:30:43.966 [2024-10-12 22:19:02.310399] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:43.966 [2024-10-12 22:19:02.310406] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:43.966 [2024-10-12 22:19:02.310409] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:43.966 [2024-10-12 22:19:02.310413] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21950d0): datao=0, datal=4096, cccid=4
00:30:43.966 [2024-10-12 22:19:02.310418] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ffb40) on tqpair(0x21950d0): expected_datao=0, payload_size=4096
00:30:43.966 [2024-10-12 22:19:02.310422] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.310429] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.310433] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.310582] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.967 [2024-10-12 22:19:02.310588] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.967 [2024-10-12 22:19:02.310592] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.310596] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffb40) on tqpair=0x21950d0
00:30:43.967 [2024-10-12 22:19:02.310610] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:30:43.967 [2024-10-12 22:19:02.310629] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:30:43.967 [2024-10-12 22:19:02.310636] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.310640] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21950d0)
00:30:43.967 [2024-10-12 22:19:02.310647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.967 [2024-10-12 22:19:02.310658] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffb40, cid 4, qid 0
00:30:43.967 [2024-10-12 22:19:02.310917] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:30:43.967 [2024-10-12 22:19:02.310924] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:30:43.967 [2024-10-12 22:19:02.310927] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.310931] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21950d0): datao=0, datal=4096, cccid=4
00:30:43.967 [2024-10-12 22:19:02.310935] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ffb40) on tqpair(0x21950d0): expected_datao=0, payload_size=4096
00:30:43.967 [2024-10-12 22:19:02.310940] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.310946] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.310950] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.352270] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.967 [2024-10-12 22:19:02.352281] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.967 [2024-10-12 22:19:02.352285] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.352289] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffb40) on tqpair=0x21950d0
00:30:43.967 [2024-10-12 22:19:02.352298] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:30:43.967 [2024-10-12 22:19:02.352307] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:30:43.967 [2024-10-12 22:19:02.352318] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:30:43.967 [2024-10-12 22:19:02.352325] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:30:43.967 [2024-10-12 22:19:02.352330] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:30:43.967 [2024-10-12 22:19:02.352336] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:30:43.967 [2024-10-12 22:19:02.352341] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:30:43.967 [2024-10-12 22:19:02.352346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:30:43.967 [2024-10-12 22:19:02.352351] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:30:43.967 [2024-10-12 22:19:02.352371] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.352376] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21950d0)
00:30:43.967 [2024-10-12 22:19:02.352384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.967 [2024-10-12 22:19:02.352391] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.352394] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.352398] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21950d0)
00:30:43.967 [2024-10-12 22:19:02.352408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:30:43.967 [2024-10-12 22:19:02.352422] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffb40, cid 4, qid 0
00:30:43.967 [2024-10-12 22:19:02.352427] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffcc0, cid 5, qid 0
00:30:43.967 [2024-10-12 22:19:02.352629] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.967 [2024-10-12 22:19:02.352635] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.967 [2024-10-12 22:19:02.352639] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.352643] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffb40) on tqpair=0x21950d0
00:30:43.967 [2024-10-12 22:19:02.352650] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.967 [2024-10-12 22:19:02.352656] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.967 [2024-10-12 22:19:02.352659] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.352663] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffcc0) on tqpair=0x21950d0
00:30:43.967 [2024-10-12 22:19:02.352673] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.352677] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21950d0)
00:30:43.967 [2024-10-12 22:19:02.352683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.967 [2024-10-12 22:19:02.352694] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffcc0, cid 5, qid 0
00:30:43.967 [2024-10-12 22:19:02.352830] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.967 [2024-10-12 22:19:02.352836] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.967 [2024-10-12 22:19:02.352840] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.352844] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffcc0) on tqpair=0x21950d0
00:30:43.967 [2024-10-12 22:19:02.352853] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.352856] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21950d0)
00:30:43.967 [2024-10-12 22:19:02.352863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.967 [2024-10-12 22:19:02.352872] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffcc0, cid 5, qid 0
00:30:43.967 [2024-10-12 22:19:02.353036] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.967 [2024-10-12 22:19:02.353042] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.967 [2024-10-12 22:19:02.353046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.353049] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffcc0) on tqpair=0x21950d0
00:30:43.967 [2024-10-12 22:19:02.353059] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.353062] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21950d0)
00:30:43.967 [2024-10-12 22:19:02.353069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.967 [2024-10-12 22:19:02.353078] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffcc0, cid 5, qid 0
00:30:43.967 [2024-10-12 22:19:02.357113] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:43.967 [2024-10-12 22:19:02.357120] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:43.967 [2024-10-12 22:19:02.357124] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.357128] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffcc0) on tqpair=0x21950d0
00:30:43.967 [2024-10-12 22:19:02.357148] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.357152] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21950d0)
00:30:43.967 [2024-10-12 22:19:02.357159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.967 [2024-10-12 22:19:02.357166] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:43.967 [2024-10-12 22:19:02.357170] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21950d0)
00:30:43.967 [2024-10-12 22:19:02.357176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:43.967 [2024-10-12
22:19:02.357183] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.967 [2024-10-12 22:19:02.357187] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x21950d0) 00:30:43.967 [2024-10-12 22:19:02.357193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.967 [2024-10-12 22:19:02.357203] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.967 [2024-10-12 22:19:02.357207] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x21950d0) 00:30:43.967 [2024-10-12 22:19:02.357213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.967 [2024-10-12 22:19:02.357226] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffcc0, cid 5, qid 0 00:30:43.967 [2024-10-12 22:19:02.357232] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffb40, cid 4, qid 0 00:30:43.967 [2024-10-12 22:19:02.357237] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ffe40, cid 6, qid 0 00:30:43.967 [2024-10-12 22:19:02.357241] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fffc0, cid 7, qid 0 00:30:43.967 [2024-10-12 22:19:02.357508] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.967 [2024-10-12 22:19:02.357514] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.967 [2024-10-12 22:19:02.357518] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.967 [2024-10-12 22:19:02.357522] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21950d0): datao=0, datal=8192, cccid=5 00:30:43.967 [2024-10-12 22:19:02.357527] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x21ffcc0) on tqpair(0x21950d0): expected_datao=0, payload_size=8192 00:30:43.967 [2024-10-12 22:19:02.357531] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.967 [2024-10-12 22:19:02.357625] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.967 [2024-10-12 22:19:02.357630] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.967 [2024-10-12 22:19:02.357636] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.967 [2024-10-12 22:19:02.357642] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.967 [2024-10-12 22:19:02.357645] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.967 [2024-10-12 22:19:02.357649] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21950d0): datao=0, datal=512, cccid=4 00:30:43.967 [2024-10-12 22:19:02.357653] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ffb40) on tqpair(0x21950d0): expected_datao=0, payload_size=512 00:30:43.967 [2024-10-12 22:19:02.357658] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.967 [2024-10-12 22:19:02.357707] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.967 [2024-10-12 22:19:02.357710] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.967 [2024-10-12 22:19:02.357716] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.968 [2024-10-12 22:19:02.357722] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.968 [2024-10-12 22:19:02.357725] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357731] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21950d0): datao=0, datal=512, cccid=6 00:30:43.968 [2024-10-12 22:19:02.357736] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21ffe40) on tqpair(0x21950d0): expected_datao=0, 
payload_size=512 00:30:43.968 [2024-10-12 22:19:02.357740] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357746] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357750] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357756] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:43.968 [2024-10-12 22:19:02.357761] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:43.968 [2024-10-12 22:19:02.357765] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357768] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21950d0): datao=0, datal=4096, cccid=7 00:30:43.968 [2024-10-12 22:19:02.357773] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fffc0) on tqpair(0x21950d0): expected_datao=0, payload_size=4096 00:30:43.968 [2024-10-12 22:19:02.357777] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357784] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357788] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357915] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.968 [2024-10-12 22:19:02.357921] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.968 [2024-10-12 22:19:02.357925] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357929] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffcc0) on tqpair=0x21950d0 00:30:43.968 [2024-10-12 22:19:02.357941] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.968 [2024-10-12 22:19:02.357947] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.968 [2024-10-12 
22:19:02.357951] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357955] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffb40) on tqpair=0x21950d0 00:30:43.968 [2024-10-12 22:19:02.357965] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.968 [2024-10-12 22:19:02.357971] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.968 [2024-10-12 22:19:02.357974] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357978] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ffe40) on tqpair=0x21950d0 00:30:43.968 [2024-10-12 22:19:02.357985] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.968 [2024-10-12 22:19:02.357991] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.968 [2024-10-12 22:19:02.357995] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.968 [2024-10-12 22:19:02.357998] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21fffc0) on tqpair=0x21950d0 00:30:43.968 ===================================================== 00:30:43.968 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.968 ===================================================== 00:30:43.968 Controller Capabilities/Features 00:30:43.968 ================================ 00:30:43.968 Vendor ID: 8086 00:30:43.968 Subsystem Vendor ID: 8086 00:30:43.968 Serial Number: SPDK00000000000001 00:30:43.968 Model Number: SPDK bdev Controller 00:30:43.968 Firmware Version: 24.09.1 00:30:43.968 Recommended Arb Burst: 6 00:30:43.968 IEEE OUI Identifier: e4 d2 5c 00:30:43.968 Multi-path I/O 00:30:43.968 May have multiple subsystem ports: Yes 00:30:43.968 May have multiple controllers: Yes 00:30:43.968 Associated with SR-IOV VF: No 00:30:43.968 Max Data Transfer Size: 131072 00:30:43.968 Max Number of Namespaces: 32 
00:30:43.968 Max Number of I/O Queues: 127 00:30:43.968 NVMe Specification Version (VS): 1.3 00:30:43.968 NVMe Specification Version (Identify): 1.3 00:30:43.968 Maximum Queue Entries: 128 00:30:43.968 Contiguous Queues Required: Yes 00:30:43.968 Arbitration Mechanisms Supported 00:30:43.968 Weighted Round Robin: Not Supported 00:30:43.968 Vendor Specific: Not Supported 00:30:43.968 Reset Timeout: 15000 ms 00:30:43.968 Doorbell Stride: 4 bytes 00:30:43.968 NVM Subsystem Reset: Not Supported 00:30:43.968 Command Sets Supported 00:30:43.968 NVM Command Set: Supported 00:30:43.968 Boot Partition: Not Supported 00:30:43.968 Memory Page Size Minimum: 4096 bytes 00:30:43.968 Memory Page Size Maximum: 4096 bytes 00:30:43.968 Persistent Memory Region: Not Supported 00:30:43.968 Optional Asynchronous Events Supported 00:30:43.968 Namespace Attribute Notices: Supported 00:30:43.968 Firmware Activation Notices: Not Supported 00:30:43.968 ANA Change Notices: Not Supported 00:30:43.968 PLE Aggregate Log Change Notices: Not Supported 00:30:43.968 LBA Status Info Alert Notices: Not Supported 00:30:43.968 EGE Aggregate Log Change Notices: Not Supported 00:30:43.968 Normal NVM Subsystem Shutdown event: Not Supported 00:30:43.968 Zone Descriptor Change Notices: Not Supported 00:30:43.968 Discovery Log Change Notices: Not Supported 00:30:43.968 Controller Attributes 00:30:43.968 128-bit Host Identifier: Supported 00:30:43.968 Non-Operational Permissive Mode: Not Supported 00:30:43.968 NVM Sets: Not Supported 00:30:43.968 Read Recovery Levels: Not Supported 00:30:43.968 Endurance Groups: Not Supported 00:30:43.968 Predictable Latency Mode: Not Supported 00:30:43.968 Traffic Based Keep ALive: Not Supported 00:30:43.968 Namespace Granularity: Not Supported 00:30:43.968 SQ Associations: Not Supported 00:30:43.968 UUID List: Not Supported 00:30:43.968 Multi-Domain Subsystem: Not Supported 00:30:43.968 Fixed Capacity Management: Not Supported 00:30:43.968 Variable Capacity Management: Not 
Supported 00:30:43.968 Delete Endurance Group: Not Supported 00:30:43.968 Delete NVM Set: Not Supported 00:30:43.968 Extended LBA Formats Supported: Not Supported 00:30:43.968 Flexible Data Placement Supported: Not Supported 00:30:43.968 00:30:43.968 Controller Memory Buffer Support 00:30:43.968 ================================ 00:30:43.968 Supported: No 00:30:43.968 00:30:43.968 Persistent Memory Region Support 00:30:43.968 ================================ 00:30:43.968 Supported: No 00:30:43.968 00:30:43.968 Admin Command Set Attributes 00:30:43.968 ============================ 00:30:43.968 Security Send/Receive: Not Supported 00:30:43.968 Format NVM: Not Supported 00:30:43.968 Firmware Activate/Download: Not Supported 00:30:43.968 Namespace Management: Not Supported 00:30:43.968 Device Self-Test: Not Supported 00:30:43.968 Directives: Not Supported 00:30:43.968 NVMe-MI: Not Supported 00:30:43.968 Virtualization Management: Not Supported 00:30:43.968 Doorbell Buffer Config: Not Supported 00:30:43.968 Get LBA Status Capability: Not Supported 00:30:43.968 Command & Feature Lockdown Capability: Not Supported 00:30:43.968 Abort Command Limit: 4 00:30:43.968 Async Event Request Limit: 4 00:30:43.968 Number of Firmware Slots: N/A 00:30:43.968 Firmware Slot 1 Read-Only: N/A 00:30:43.968 Firmware Activation Without Reset: N/A 00:30:43.968 Multiple Update Detection Support: N/A 00:30:43.968 Firmware Update Granularity: No Information Provided 00:30:43.968 Per-Namespace SMART Log: No 00:30:43.968 Asymmetric Namespace Access Log Page: Not Supported 00:30:43.968 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:43.968 Command Effects Log Page: Supported 00:30:43.968 Get Log Page Extended Data: Supported 00:30:43.968 Telemetry Log Pages: Not Supported 00:30:43.968 Persistent Event Log Pages: Not Supported 00:30:43.968 Supported Log Pages Log Page: May Support 00:30:43.968 Commands Supported & Effects Log Page: Not Supported 00:30:43.968 Feature Identifiers & Effects Log Page:May 
Support 00:30:43.968 NVMe-MI Commands & Effects Log Page: May Support 00:30:43.968 Data Area 4 for Telemetry Log: Not Supported 00:30:43.968 Error Log Page Entries Supported: 128 00:30:43.968 Keep Alive: Supported 00:30:43.968 Keep Alive Granularity: 10000 ms 00:30:43.968 00:30:43.968 NVM Command Set Attributes 00:30:43.968 ========================== 00:30:43.968 Submission Queue Entry Size 00:30:43.968 Max: 64 00:30:43.968 Min: 64 00:30:43.968 Completion Queue Entry Size 00:30:43.968 Max: 16 00:30:43.968 Min: 16 00:30:43.968 Number of Namespaces: 32 00:30:43.968 Compare Command: Supported 00:30:43.968 Write Uncorrectable Command: Not Supported 00:30:43.968 Dataset Management Command: Supported 00:30:43.968 Write Zeroes Command: Supported 00:30:43.968 Set Features Save Field: Not Supported 00:30:43.968 Reservations: Supported 00:30:43.968 Timestamp: Not Supported 00:30:43.968 Copy: Supported 00:30:43.968 Volatile Write Cache: Present 00:30:43.968 Atomic Write Unit (Normal): 1 00:30:43.968 Atomic Write Unit (PFail): 1 00:30:43.968 Atomic Compare & Write Unit: 1 00:30:43.968 Fused Compare & Write: Supported 00:30:43.968 Scatter-Gather List 00:30:43.968 SGL Command Set: Supported 00:30:43.968 SGL Keyed: Supported 00:30:43.968 SGL Bit Bucket Descriptor: Not Supported 00:30:43.968 SGL Metadata Pointer: Not Supported 00:30:43.968 Oversized SGL: Not Supported 00:30:43.968 SGL Metadata Address: Not Supported 00:30:43.968 SGL Offset: Supported 00:30:43.968 Transport SGL Data Block: Not Supported 00:30:43.968 Replay Protected Memory Block: Not Supported 00:30:43.968 00:30:43.968 Firmware Slot Information 00:30:43.968 ========================= 00:30:43.968 Active slot: 1 00:30:43.968 Slot 1 Firmware Revision: 24.09.1 00:30:43.968 00:30:43.968 00:30:43.968 Commands Supported and Effects 00:30:43.968 ============================== 00:30:43.968 Admin Commands 00:30:43.968 -------------- 00:30:43.968 Get Log Page (02h): Supported 00:30:43.968 Identify (06h): Supported 
00:30:43.968 Abort (08h): Supported 00:30:43.968 Set Features (09h): Supported 00:30:43.968 Get Features (0Ah): Supported 00:30:43.968 Asynchronous Event Request (0Ch): Supported 00:30:43.968 Keep Alive (18h): Supported 00:30:43.969 I/O Commands 00:30:43.969 ------------ 00:30:43.969 Flush (00h): Supported LBA-Change 00:30:43.969 Write (01h): Supported LBA-Change 00:30:43.969 Read (02h): Supported 00:30:43.969 Compare (05h): Supported 00:30:43.969 Write Zeroes (08h): Supported LBA-Change 00:30:43.969 Dataset Management (09h): Supported LBA-Change 00:30:43.969 Copy (19h): Supported LBA-Change 00:30:43.969 00:30:43.969 Error Log 00:30:43.969 ========= 00:30:43.969 00:30:43.969 Arbitration 00:30:43.969 =========== 00:30:43.969 Arbitration Burst: 1 00:30:43.969 00:30:43.969 Power Management 00:30:43.969 ================ 00:30:43.969 Number of Power States: 1 00:30:43.969 Current Power State: Power State #0 00:30:43.969 Power State #0: 00:30:43.969 Max Power: 0.00 W 00:30:43.969 Non-Operational State: Operational 00:30:43.969 Entry Latency: Not Reported 00:30:43.969 Exit Latency: Not Reported 00:30:43.969 Relative Read Throughput: 0 00:30:43.969 Relative Read Latency: 0 00:30:43.969 Relative Write Throughput: 0 00:30:43.969 Relative Write Latency: 0 00:30:43.969 Idle Power: Not Reported 00:30:43.969 Active Power: Not Reported 00:30:43.969 Non-Operational Permissive Mode: Not Supported 00:30:43.969 00:30:43.969 Health Information 00:30:43.969 ================== 00:30:43.969 Critical Warnings: 00:30:43.969 Available Spare Space: OK 00:30:43.969 Temperature: OK 00:30:43.969 Device Reliability: OK 00:30:43.969 Read Only: No 00:30:43.969 Volatile Memory Backup: OK 00:30:43.969 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:43.969 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:43.969 Available Spare: 0% 00:30:43.969 Available Spare Threshold: 0% 00:30:43.969 Life Percentage U[2024-10-12 22:19:02.358112] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:30:43.969 [2024-10-12 22:19:02.358118] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x21950d0) 00:30:43.969 [2024-10-12 22:19:02.358125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.969 [2024-10-12 22:19:02.358137] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fffc0, cid 7, qid 0 00:30:43.969 [2024-10-12 22:19:02.358337] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.969 [2024-10-12 22:19:02.358344] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.969 [2024-10-12 22:19:02.358347] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.358351] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21fffc0) on tqpair=0x21950d0 00:30:43.969 [2024-10-12 22:19:02.358391] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:43.969 [2024-10-12 22:19:02.358403] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff540) on tqpair=0x21950d0 00:30:43.969 [2024-10-12 22:19:02.358409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.969 [2024-10-12 22:19:02.358415] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff6c0) on tqpair=0x21950d0 00:30:43.969 [2024-10-12 22:19:02.358420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.969 [2024-10-12 22:19:02.358425] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff840) on tqpair=0x21950d0 00:30:43.969 [2024-10-12 22:19:02.358430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:43.969 [2024-10-12 22:19:02.358434] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.969 [2024-10-12 22:19:02.358439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.969 [2024-10-12 22:19:02.358448] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.358452] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.358456] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.969 [2024-10-12 22:19:02.358463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.969 [2024-10-12 22:19:02.358475] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.969 [2024-10-12 22:19:02.358594] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.969 [2024-10-12 22:19:02.358600] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.969 [2024-10-12 22:19:02.358604] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.358608] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.969 [2024-10-12 22:19:02.358615] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.358619] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.358622] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.969 [2024-10-12 22:19:02.358629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.969 [2024-10-12 22:19:02.358643] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.969 [2024-10-12 22:19:02.358875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.969 [2024-10-12 22:19:02.358882] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.969 [2024-10-12 22:19:02.358885] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.358889] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.969 [2024-10-12 22:19:02.358894] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:43.969 [2024-10-12 22:19:02.358898] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:43.969 [2024-10-12 22:19:02.358907] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.358911] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.358915] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.969 [2024-10-12 22:19:02.358921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.969 [2024-10-12 22:19:02.358932] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.969 [2024-10-12 22:19:02.359101] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.969 [2024-10-12 22:19:02.359112] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.969 [2024-10-12 22:19:02.359116] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.359120] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.969 [2024-10-12 22:19:02.359131] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.359135] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.359138] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.969 [2024-10-12 22:19:02.359145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.969 [2024-10-12 22:19:02.359156] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.969 [2024-10-12 22:19:02.359252] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.969 [2024-10-12 22:19:02.359258] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.969 [2024-10-12 22:19:02.359262] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.359265] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.969 [2024-10-12 22:19:02.359275] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.359279] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.359282] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.969 [2024-10-12 22:19:02.359289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.969 [2024-10-12 22:19:02.359300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.969 [2024-10-12 22:19:02.359481] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.969 [2024-10-12 22:19:02.359487] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.969 [2024-10-12 22:19:02.359490] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.359494] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.969 [2024-10-12 22:19:02.359504] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.359508] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.969 [2024-10-12 22:19:02.359511] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.969 [2024-10-12 22:19:02.359518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.969 [2024-10-12 22:19:02.359528] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.969 [2024-10-12 22:19:02.359687] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.969 [2024-10-12 22:19:02.359693] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.969 [2024-10-12 22:19:02.359697] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.359701] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.970 [2024-10-12 22:19:02.359712] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.359715] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.359719] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.970 [2024-10-12 22:19:02.359726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.970 [2024-10-12 22:19:02.359736] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.970 [2024-10-12 
22:19:02.359934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.970 [2024-10-12 22:19:02.359942] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.970 [2024-10-12 22:19:02.359946] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.359950] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.970 [2024-10-12 22:19:02.359959] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.359963] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.359967] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.970 [2024-10-12 22:19:02.359974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.970 [2024-10-12 22:19:02.359984] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.970 [2024-10-12 22:19:02.360210] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.970 [2024-10-12 22:19:02.360217] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.970 [2024-10-12 22:19:02.360221] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360225] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.970 [2024-10-12 22:19:02.360234] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360238] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360242] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.970 [2024-10-12 22:19:02.360249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.970 [2024-10-12 22:19:02.360259] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.970 [2024-10-12 22:19:02.360453] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.970 [2024-10-12 22:19:02.360460] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.970 [2024-10-12 22:19:02.360463] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360467] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.970 [2024-10-12 22:19:02.360477] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360481] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360484] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.970 [2024-10-12 22:19:02.360491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.970 [2024-10-12 22:19:02.360501] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.970 [2024-10-12 22:19:02.360689] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.970 [2024-10-12 22:19:02.360695] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.970 [2024-10-12 22:19:02.360699] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360703] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.970 [2024-10-12 22:19:02.360712] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360716] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:30:43.970 [2024-10-12 22:19:02.360720] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.970 [2024-10-12 22:19:02.360726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.970 [2024-10-12 22:19:02.360736] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.970 [2024-10-12 22:19:02.360914] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.970 [2024-10-12 22:19:02.360921] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.970 [2024-10-12 22:19:02.360926] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360930] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.970 [2024-10-12 22:19:02.360940] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360944] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.360947] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.970 [2024-10-12 22:19:02.360954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.970 [2024-10-12 22:19:02.360964] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.970 [2024-10-12 22:19:02.361099] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.970 [2024-10-12 22:19:02.365113] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.970 [2024-10-12 22:19:02.365118] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.365122] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) 
on tqpair=0x21950d0 00:30:43.970 [2024-10-12 22:19:02.365133] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.365136] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.365140] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21950d0) 00:30:43.970 [2024-10-12 22:19:02.365147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:43.970 [2024-10-12 22:19:02.365159] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21ff9c0, cid 3, qid 0 00:30:43.970 [2024-10-12 22:19:02.365352] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:43.970 [2024-10-12 22:19:02.365358] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:43.970 [2024-10-12 22:19:02.365362] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:43.970 [2024-10-12 22:19:02.365365] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21ff9c0) on tqpair=0x21950d0 00:30:43.970 [2024-10-12 22:19:02.365373] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:30:43.970 sed: 0% 00:30:43.970 Data Units Read: 0 00:30:43.970 Data Units Written: 0 00:30:43.970 Host Read Commands: 0 00:30:43.970 Host Write Commands: 0 00:30:43.970 Controller Busy Time: 0 minutes 00:30:43.970 Power Cycles: 0 00:30:43.970 Power On Hours: 0 hours 00:30:43.970 Unsafe Shutdowns: 0 00:30:43.970 Unrecoverable Media Errors: 0 00:30:43.970 Lifetime Error Log Entries: 0 00:30:43.970 Warning Temperature Time: 0 minutes 00:30:43.970 Critical Temperature Time: 0 minutes 00:30:43.970 00:30:43.970 Number of Queues 00:30:43.970 ================ 00:30:43.970 Number of I/O Submission Queues: 127 00:30:43.970 Number of I/O Completion Queues: 127 00:30:43.970 00:30:43.970 Active 
Namespaces 00:30:43.970 ================= 00:30:43.970 Namespace ID:1 00:30:43.970 Error Recovery Timeout: Unlimited 00:30:43.970 Command Set Identifier: NVM (00h) 00:30:43.970 Deallocate: Supported 00:30:43.970 Deallocated/Unwritten Error: Not Supported 00:30:43.970 Deallocated Read Value: Unknown 00:30:43.970 Deallocate in Write Zeroes: Not Supported 00:30:43.970 Deallocated Guard Field: 0xFFFF 00:30:43.970 Flush: Supported 00:30:43.970 Reservation: Supported 00:30:43.970 Namespace Sharing Capabilities: Multiple Controllers 00:30:43.970 Size (in LBAs): 131072 (0GiB) 00:30:43.970 Capacity (in LBAs): 131072 (0GiB) 00:30:43.970 Utilization (in LBAs): 131072 (0GiB) 00:30:43.970 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:43.970 EUI64: ABCDEF0123456789 00:30:43.970 UUID: 74bcae42-597d-461c-988d-b377b105a2fe 00:30:43.970 Thin Provisioning: Not Supported 00:30:43.970 Per-NS Atomic Units: Yes 00:30:43.970 Atomic Boundary Size (Normal): 0 00:30:43.970 Atomic Boundary Size (PFail): 0 00:30:43.970 Atomic Boundary Offset: 0 00:30:43.970 Maximum Single Source Range Length: 65535 00:30:43.970 Maximum Copy Length: 65535 00:30:43.970 Maximum Source Range Count: 1 00:30:43.970 NGUID/EUI64 Never Reused: No 00:30:43.970 Namespace Write Protected: No 00:30:43.970 Number of LBA Formats: 1 00:30:43.970 Current LBA Format: LBA Format #00 00:30:43.970 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:43.970 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.970 22:19:02 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:43.970 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.970 rmmod nvme_tcp 00:30:43.970 rmmod nvme_fabrics 00:30:43.970 rmmod nvme_keyring 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 3667296 ']' 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 3667296 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3667296 ']' 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3667296 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3667296 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3667296' 00:30:44.232 killing process with pid 3667296 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3667296 00:30:44.232 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3667296 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.493 22:19:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.404 22:19:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:46.404 00:30:46.404 real 0m11.717s 00:30:46.404 user 0m8.913s 00:30:46.404 sys 
0m6.136s 00:30:46.404 22:19:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:46.405 22:19:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:46.405 ************************************ 00:30:46.405 END TEST nvmf_identify 00:30:46.405 ************************************ 00:30:46.405 22:19:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:46.405 22:19:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:46.405 22:19:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:46.405 22:19:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.666 ************************************ 00:30:46.666 START TEST nvmf_perf 00:30:46.666 ************************************ 00:30:46.666 22:19:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:46.666 * Looking for test storage... 
00:30:46.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:46.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.666 --rc genhtml_branch_coverage=1 00:30:46.666 --rc genhtml_function_coverage=1 00:30:46.666 --rc genhtml_legend=1 00:30:46.666 --rc geninfo_all_blocks=1 00:30:46.666 --rc geninfo_unexecuted_blocks=1 00:30:46.666 00:30:46.666 ' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:46.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:46.666 --rc genhtml_branch_coverage=1 00:30:46.666 --rc genhtml_function_coverage=1 00:30:46.666 --rc genhtml_legend=1 00:30:46.666 --rc geninfo_all_blocks=1 00:30:46.666 --rc geninfo_unexecuted_blocks=1 00:30:46.666 00:30:46.666 ' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:46.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.666 --rc genhtml_branch_coverage=1 00:30:46.666 --rc genhtml_function_coverage=1 00:30:46.666 --rc genhtml_legend=1 00:30:46.666 --rc geninfo_all_blocks=1 00:30:46.666 --rc geninfo_unexecuted_blocks=1 00:30:46.666 00:30:46.666 ' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:46.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.666 --rc genhtml_branch_coverage=1 00:30:46.666 --rc genhtml_function_coverage=1 00:30:46.666 --rc genhtml_legend=1 00:30:46.666 --rc geninfo_all_blocks=1 00:30:46.666 --rc geninfo_unexecuted_blocks=1 00:30:46.666 00:30:46.666 ' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:46.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:46.666 22:19:05 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:30:46.666 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:30:46.667 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:30:46.667 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.667 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.667 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.927 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:30:46.927 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:30:46.927 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:46.927 22:19:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:55.072 22:19:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.072 
22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:55.072 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:55.072 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:30:55.072 
22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:55.072 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.072 22:19:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:55.072 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:30:55.072 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:55.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:55.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:30:55.073 00:30:55.073 --- 10.0.0.2 ping statistics --- 00:30:55.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.073 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:55.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:30:55.073 00:30:55.073 --- 10.0.0.1 ping statistics --- 00:30:55.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.073 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=3671895 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 3671895 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3671895 ']' 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:55.073 22:19:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:55.073 [2024-10-12 22:19:12.734359] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:55.073 [2024-10-12 22:19:12.734427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.073 [2024-10-12 22:19:12.824525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:55.073 [2024-10-12 22:19:12.873087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:55.073 [2024-10-12 22:19:12.873144] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.073 [2024-10-12 22:19:12.873153] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.073 [2024-10-12 22:19:12.873160] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.073 [2024-10-12 22:19:12.873167] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:55.073 [2024-10-12 22:19:12.873262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.073 [2024-10-12 22:19:12.873424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:30:55.073 [2024-10-12 22:19:12.873577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.073 [2024-10-12 22:19:12.873578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:30:55.334 22:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:55.334 22:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:30:55.334 22:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:30:55.334 22:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:55.334 22:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:55.334 22:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.334 22:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:55.334 22:19:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:55.906 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:55.906 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:55.906 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:30:55.906 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:56.167 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:56.167 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:30:56.167 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:56.167 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:56.167 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:56.427 [2024-10-12 22:19:14.703800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.427 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:56.688 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:56.688 22:19:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:56.688 22:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:56.688 22:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:56.949 22:19:15 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:57.210 [2024-10-12 22:19:15.503495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.210 22:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:57.471 22:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:30:57.471 22:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:30:57.471 22:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:57.471 22:19:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:30:58.855 Initializing NVMe Controllers 00:30:58.855 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:30:58.855 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:30:58.855 Initialization complete. Launching workers. 
00:30:58.855 ======================================================== 00:30:58.855 Latency(us) 00:30:58.855 Device Information : IOPS MiB/s Average min max 00:30:58.855 PCIE (0000:65:00.0) NSID 1 from core 0: 78560.88 306.88 406.59 13.56 7203.76 00:30:58.855 ======================================================== 00:30:58.855 Total : 78560.88 306.88 406.59 13.56 7203.76 00:30:58.855 00:30:58.856 22:19:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:00.239 Initializing NVMe Controllers 00:31:00.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:00.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:00.239 Initialization complete. Launching workers. 
00:31:00.239 ======================================================== 00:31:00.239 Latency(us) 00:31:00.239 Device Information : IOPS MiB/s Average min max 00:31:00.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 53.80 0.21 18590.63 266.11 46010.79 00:31:00.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.78 0.24 17106.56 6983.88 47899.75 00:31:00.239 ======================================================== 00:31:00.239 Total : 114.58 0.45 17803.43 266.11 47899.75 00:31:00.239 00:31:00.239 22:19:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:01.622 Initializing NVMe Controllers 00:31:01.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:01.622 Initialization complete. Launching workers. 
00:31:01.622 ======================================================== 00:31:01.622 Latency(us) 00:31:01.622 Device Information : IOPS MiB/s Average min max 00:31:01.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12423.00 48.53 2577.83 408.30 6331.41 00:31:01.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3841.00 15.00 8376.36 6465.21 16181.85 00:31:01.622 ======================================================== 00:31:01.622 Total : 16264.00 63.53 3947.25 408.30 16181.85 00:31:01.622 00:31:01.622 22:19:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:01.622 22:19:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:01.622 22:19:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:04.163 Initializing NVMe Controllers 00:31:04.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.163 Controller IO queue size 128, less than required. 00:31:04.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.163 Controller IO queue size 128, less than required. 00:31:04.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:04.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:04.163 Initialization complete. Launching workers. 
00:31:04.163 ======================================================== 00:31:04.163 Latency(us) 00:31:04.163 Device Information : IOPS MiB/s Average min max 00:31:04.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1884.49 471.12 68355.30 41169.20 118892.81 00:31:04.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.50 151.12 221910.80 62503.98 349992.17 00:31:04.163 ======================================================== 00:31:04.163 Total : 2488.99 622.25 105649.11 41169.20 349992.17 00:31:04.163 00:31:04.163 22:19:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:04.424 No valid NVMe controllers or AIO or URING devices found 00:31:04.424 Initializing NVMe Controllers 00:31:04.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.424 Controller IO queue size 128, less than required. 00:31:04.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.424 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:04.424 Controller IO queue size 128, less than required. 00:31:04.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:04.424 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:04.424 WARNING: Some requested NVMe devices were skipped 00:31:04.424 22:19:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:06.966 Initializing NVMe Controllers 00:31:06.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:06.966 Controller IO queue size 128, less than required. 00:31:06.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:06.966 Controller IO queue size 128, less than required. 00:31:06.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:06.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:06.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:06.966 Initialization complete. Launching workers. 
00:31:06.966 00:31:06.966 ==================== 00:31:06.966 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:06.966 TCP transport: 00:31:06.966 polls: 31460 00:31:06.966 idle_polls: 19103 00:31:06.966 sock_completions: 12357 00:31:06.966 nvme_completions: 6863 00:31:06.966 submitted_requests: 10262 00:31:06.966 queued_requests: 1 00:31:06.966 00:31:06.966 ==================== 00:31:06.966 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:06.966 TCP transport: 00:31:06.966 polls: 28653 00:31:06.966 idle_polls: 14107 00:31:06.966 sock_completions: 14546 00:31:06.966 nvme_completions: 9681 00:31:06.966 submitted_requests: 14434 00:31:06.966 queued_requests: 1 00:31:06.966 ======================================================== 00:31:06.966 Latency(us) 00:31:06.967 Device Information : IOPS MiB/s Average min max 00:31:06.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1715.49 428.87 76418.25 32335.71 130476.72 00:31:06.967 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2419.99 605.00 53066.71 24749.75 95348.74 00:31:06.967 ======================================================== 00:31:06.967 Total : 4135.48 1033.87 62753.46 24749.75 130476.72 00:31:06.967 00:31:06.967 22:19:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:06.967 22:19:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:06.967 22:19:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:06.967 22:19:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:31:06.967 22:19:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@72 -- # ls_guid=6220b328-912c-4a7b-8e1d-5a8bcd04da65 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 6220b328-912c-4a7b-8e1d-5a8bcd04da65 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=6220b328-912c-4a7b-8e1d-5a8bcd04da65 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:08.351 { 00:31:08.351 "uuid": "6220b328-912c-4a7b-8e1d-5a8bcd04da65", 00:31:08.351 "name": "lvs_0", 00:31:08.351 "base_bdev": "Nvme0n1", 00:31:08.351 "total_data_clusters": 457407, 00:31:08.351 "free_clusters": 457407, 00:31:08.351 "block_size": 512, 00:31:08.351 "cluster_size": 4194304 00:31:08.351 } 00:31:08.351 ]' 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="6220b328-912c-4a7b-8e1d-5a8bcd04da65") .free_clusters' 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="6220b328-912c-4a7b-8e1d-5a8bcd04da65") .cluster_size' 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 
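The `get_lvs_free_mb` step above derives a size in MB from the lvstore's reported `free_clusters` and `cluster_size` (457407 clusters × 4 MiB = 1829628 MB), which the script then caps at 20480 MB before creating `lbd_0`. A minimal sketch of that arithmetic, using the values reported by `bdev_lvol_get_lvstores` in this log (the helper name `free_mb_from_clusters` is ours, not part of the harness):

```python
def free_mb_from_clusters(free_clusters: int, cluster_size_bytes: int) -> int:
    """Mirror the log's free-space math: clusters * cluster size, expressed in MiB."""
    return free_clusters * cluster_size_bytes // (1024 * 1024)

# lvs_0 as reported above: 457407 free clusters of 4194304 bytes each.
lvs_0_mb = free_mb_from_clusters(457407, 4194304)  # 1829628, matching the echoed value
lbd_0_mb = min(lvs_0_mb, 20480)                    # the script caps the lvol at 20480 MB

# The nested lvstore lvs_n_0 later in the run: 5114 clusters -> 20456 MB,
# which is below the 20480 cap, so lbd_nest_0 is created at 20456.
lvs_n_0_mb = free_mb_from_clusters(5114, 4194304)
```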
00:31:08.351 1829628 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:08.351 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6220b328-912c-4a7b-8e1d-5a8bcd04da65 lbd_0 20480 00:31:08.611 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=721ac925-08c5-4ea5-a4e0-606300f5c9ad 00:31:08.611 22:19:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 721ac925-08c5-4ea5-a4e0-606300f5c9ad lvs_n_0 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2c00cdb6-3aa6-4607-a94f-29d3c1cb2795 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2c00cdb6-3aa6-4607-a94f-29d3c1cb2795 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=2c00cdb6-3aa6-4607-a94f-29d3c1cb2795 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:10.526 { 00:31:10.526 "uuid": "6220b328-912c-4a7b-8e1d-5a8bcd04da65", 00:31:10.526 "name": "lvs_0", 00:31:10.526 "base_bdev": "Nvme0n1", 00:31:10.526 "total_data_clusters": 457407, 00:31:10.526 "free_clusters": 452287, 00:31:10.526 "block_size": 512, 00:31:10.526 
"cluster_size": 4194304 00:31:10.526 }, 00:31:10.526 { 00:31:10.526 "uuid": "2c00cdb6-3aa6-4607-a94f-29d3c1cb2795", 00:31:10.526 "name": "lvs_n_0", 00:31:10.526 "base_bdev": "721ac925-08c5-4ea5-a4e0-606300f5c9ad", 00:31:10.526 "total_data_clusters": 5114, 00:31:10.526 "free_clusters": 5114, 00:31:10.526 "block_size": 512, 00:31:10.526 "cluster_size": 4194304 00:31:10.526 } 00:31:10.526 ]' 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="2c00cdb6-3aa6-4607-a94f-29d3c1cb2795") .free_clusters' 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="2c00cdb6-3aa6-4607-a94f-29d3c1cb2795") .cluster_size' 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:31:10.526 20456 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2c00cdb6-3aa6-4607-a94f-29d3c1cb2795 lbd_nest_0 20456 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=2f00863c-efc2-4e2c-b7d5-6c057ead3b9d 00:31:10.526 22:19:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:10.801 22:19:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:10.801 22:19:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2f00863c-efc2-4e2c-b7d5-6c057ead3b9d 00:31:11.065 22:19:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.065 22:19:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:11.065 22:19:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:11.065 22:19:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:11.065 22:19:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:11.065 22:19:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:23.289 Initializing NVMe Controllers 00:31:23.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:23.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:23.289 Initialization complete. Launching workers. 
00:31:23.289 ======================================================== 00:31:23.289 Latency(us) 00:31:23.289 Device Information : IOPS MiB/s Average min max 00:31:23.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.40 0.02 21647.20 307.87 45946.97 00:31:23.289 ======================================================== 00:31:23.289 Total : 46.40 0.02 21647.20 307.87 45946.97 00:31:23.289 00:31:23.289 22:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:23.289 22:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:33.284 Initializing NVMe Controllers 00:31:33.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:33.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:33.284 Initialization complete. Launching workers. 
00:31:33.284 ======================================================== 00:31:33.284 Latency(us) 00:31:33.284 Device Information : IOPS MiB/s Average min max 00:31:33.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 61.70 7.71 16227.35 7968.62 55868.58 00:31:33.284 ======================================================== 00:31:33.284 Total : 61.70 7.71 16227.35 7968.62 55868.58 00:31:33.284 00:31:33.284 22:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:33.285 22:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:33.285 22:19:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:43.373 Initializing NVMe Controllers 00:31:43.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:43.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:43.373 Initialization complete. Launching workers. 
00:31:43.373 ======================================================== 00:31:43.373 Latency(us) 00:31:43.373 Device Information : IOPS MiB/s Average min max 00:31:43.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8842.21 4.32 3625.97 377.56 47888.80 00:31:43.373 ======================================================== 00:31:43.373 Total : 8842.21 4.32 3625.97 377.56 47888.80 00:31:43.373 00:31:43.373 22:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:43.373 22:20:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:53.375 Initializing NVMe Controllers 00:31:53.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:53.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:53.375 Initialization complete. Launching workers. 
00:31:53.375 ======================================================== 00:31:53.375 Latency(us) 00:31:53.375 Device Information : IOPS MiB/s Average min max 00:31:53.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4021.19 502.65 7957.86 398.78 25578.84 00:31:53.375 ======================================================== 00:31:53.375 Total : 4021.19 502.65 7957.86 398.78 25578.84 00:31:53.375 00:31:53.375 22:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:53.375 22:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:53.375 22:20:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:03.374 Initializing NVMe Controllers 00:32:03.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:03.374 Controller IO queue size 128, less than required. 00:32:03.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:03.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:03.374 Initialization complete. Launching workers. 
00:32:03.374 ======================================================== 00:32:03.374 Latency(us) 00:32:03.374 Device Information : IOPS MiB/s Average min max 00:32:03.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15813.30 7.72 8099.11 1457.07 49737.02 00:32:03.374 ======================================================== 00:32:03.374 Total : 15813.30 7.72 8099.11 1457.07 49737.02 00:32:03.374 00:32:03.374 22:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:03.374 22:20:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:13.370 Initializing NVMe Controllers 00:32:13.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:13.370 Controller IO queue size 128, less than required. 00:32:13.370 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:13.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:13.370 Initialization complete. Launching workers. 
00:32:13.370 ======================================================== 00:32:13.370 Latency(us) 00:32:13.370 Device Information : IOPS MiB/s Average min max 00:32:13.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1198.84 149.85 107410.57 23395.51 221521.77 00:32:13.370 ======================================================== 00:32:13.370 Total : 1198.84 149.85 107410.57 23395.51 221521.77 00:32:13.370 00:32:13.631 22:20:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:13.631 22:20:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2f00863c-efc2-4e2c-b7d5-6c057ead3b9d 00:32:15.542 22:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:15.542 22:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 721ac925-08c5-4ea5-a4e0-606300f5c9ad 00:32:15.542 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.802 rmmod nvme_tcp 00:32:15.802 rmmod nvme_fabrics 00:32:15.802 rmmod nvme_keyring 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 3671895 ']' 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 3671895 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3671895 ']' 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3671895 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:15.802 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3671895 00:32:16.063 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:16.063 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:16.063 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3671895' 00:32:16.063 killing process with pid 3671895 00:32:16.063 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3671895 00:32:16.063 22:20:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3671895 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # 
[[ tcp == \t\c\p ]] 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.972 22:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.515 22:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:20.515 00:32:20.515 real 1m33.469s 00:32:20.515 user 5m29.822s 00:32:20.515 sys 0m16.156s 00:32:20.515 22:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:20.515 22:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:20.515 ************************************ 00:32:20.515 END TEST nvmf_perf 00:32:20.515 ************************************ 00:32:20.515 22:20:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:20.515 22:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:20.515 22:20:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:20.515 22:20:38 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:20.515 ************************************ 00:32:20.515 START TEST nvmf_fio_host 00:32:20.515 ************************************ 00:32:20.515 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:20.515 * Looking for test storage... 00:32:20.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:20.515 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- 
# export 'LCOV_OPTS= 00:32:20.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.516 --rc genhtml_branch_coverage=1 00:32:20.516 --rc genhtml_function_coverage=1 00:32:20.516 --rc genhtml_legend=1 00:32:20.516 --rc geninfo_all_blocks=1 00:32:20.516 --rc geninfo_unexecuted_blocks=1 00:32:20.516 00:32:20.516 ' 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:20.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.516 --rc genhtml_branch_coverage=1 00:32:20.516 --rc genhtml_function_coverage=1 00:32:20.516 --rc genhtml_legend=1 00:32:20.516 --rc geninfo_all_blocks=1 00:32:20.516 --rc geninfo_unexecuted_blocks=1 00:32:20.516 00:32:20.516 ' 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:20.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.516 --rc genhtml_branch_coverage=1 00:32:20.516 --rc genhtml_function_coverage=1 00:32:20.516 --rc genhtml_legend=1 00:32:20.516 --rc geninfo_all_blocks=1 00:32:20.516 --rc geninfo_unexecuted_blocks=1 00:32:20.516 00:32:20.516 ' 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:20.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.516 --rc genhtml_branch_coverage=1 00:32:20.516 --rc genhtml_function_coverage=1 00:32:20.516 --rc genhtml_legend=1 00:32:20.516 --rc geninfo_all_blocks=1 00:32:20.516 --rc geninfo_unexecuted_blocks=1 00:32:20.516 00:32:20.516 ' 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.516 22:20:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.516 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:20.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:20.517 22:20:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:20.517 22:20:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:32:28.661 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:28.662 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:28.662 22:20:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:28.662 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:28.662 22:20:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:28.662 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:28.662 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 
00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:28.662 22:20:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:28.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:28.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:32:28.662 00:32:28.662 --- 10.0.0.2 ping statistics --- 00:32:28.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.662 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:28.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:28.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:32:28.662 00:32:28.662 --- 10.0.0.1 ping statistics --- 00:32:28.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:28.662 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3691766 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3691766 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3691766 ']' 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:28.662 22:20:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.662 [2024-10-12 22:20:46.273003] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:32:28.662 [2024-10-12 22:20:46.273072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.662 [2024-10-12 22:20:46.364960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:28.662 [2024-10-12 22:20:46.413869] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.662 [2024-10-12 22:20:46.413922] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:28.662 [2024-10-12 22:20:46.413930] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.662 [2024-10-12 22:20:46.413937] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.662 [2024-10-12 22:20:46.413943] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.662 [2024-10-12 22:20:46.414094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:32:28.662 [2024-10-12 22:20:46.414254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:32:28.662 [2024-10-12 22:20:46.414439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.662 [2024-10-12 22:20:46.414441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:32:28.662 22:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.662 22:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:32:28.662 22:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:28.924 [2024-10-12 22:20:47.269369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.924 22:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:28.924 22:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:28.924 22:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.924 22:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:29.184 Malloc1 00:32:29.184 22:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:29.445 22:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:29.706 22:20:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.706 [2024-10-12 22:20:48.134279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.706 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:29.967 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:29.968 22:20:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:29.968 22:20:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:30.539 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:30.539 fio-3.35 00:32:30.539 Starting 1 thread 00:32:33.086 00:32:33.086 test: (groupid=0, jobs=1): err= 0: pid=3692606: Sat Oct 12 22:20:51 2024 00:32:33.086 read: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec) 00:32:33.086 slat (usec): min=2, max=259, avg= 2.15, stdev= 2.19 00:32:33.086 clat (usec): min=3307, max=8627, avg=5111.83, stdev=382.47 00:32:33.086 lat (usec): min=3345, max=8629, avg=5113.98, stdev=382.61 00:32:33.086 clat percentiles (usec): 00:32:33.086 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:32:33.086 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:32:33.086 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:32:33.086 | 99.00th=[ 6063], 99.50th=[ 6849], 99.90th=[ 7701], 99.95th=[ 8029], 00:32:33.086 | 99.99th=[ 8586] 00:32:33.086 bw ( KiB/s): min=52184, max=55944, per=99.94%, avg=54946.00, stdev=1842.50, samples=4 00:32:33.086 iops : min=13046, max=13986, avg=13736.50, stdev=460.62, samples=4 00:32:33.086 write: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2004msec); 0 zone resets 00:32:33.086 slat (usec): min=2, max=242, avg= 2.22, stdev= 1.61 00:32:33.086 clat (usec): min=2614, max=7582, avg=4144.36, stdev=320.30 00:32:33.086 lat (usec): min=2630, max=7584, avg=4146.57, stdev=320.50 00:32:33.086 clat percentiles (usec): 00:32:33.086 | 1.00th=[ 3458], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3916], 00:32:33.086 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:32:33.086 | 70.00th=[ 
4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:32:33.086 | 99.00th=[ 4883], 99.50th=[ 5604], 99.90th=[ 6587], 99.95th=[ 6980], 00:32:33.086 | 99.99th=[ 7504] 00:32:33.086 bw ( KiB/s): min=52536, max=55752, per=100.00%, avg=54880.00, stdev=1567.47, samples=4 00:32:33.086 iops : min=13134, max=13938, avg=13720.00, stdev=391.87, samples=4 00:32:33.086 lat (msec) : 4=15.70%, 10=84.30% 00:32:33.086 cpu : usr=73.54%, sys=25.16%, ctx=52, majf=0, minf=17 00:32:33.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:33.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:33.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:33.086 issued rwts: total=27544,27496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:33.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:33.086 00:32:33.086 Run status group 0 (all jobs): 00:32:33.086 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec 00:32:33.086 WRITE: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2004-2004msec 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:33.086 
22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:33.086 22:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:33.086 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:33.086 fio-3.35 00:32:33.086 Starting 1 thread 00:32:35.634 00:32:35.634 test: (groupid=0, jobs=1): err= 0: pid=3693161: Sat Oct 12 22:20:53 2024 00:32:35.634 read: IOPS=9693, BW=151MiB/s (159MB/s)(304MiB/2004msec) 00:32:35.634 slat (usec): min=3, max=109, avg= 3.60, stdev= 1.63 00:32:35.634 clat (usec): min=2611, max=16005, avg=7955.37, stdev=1991.98 00:32:35.634 lat (usec): min=2614, max=16008, avg=7958.98, stdev=1992.18 00:32:35.634 clat percentiles (usec): 00:32:35.634 | 1.00th=[ 3949], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6259], 00:32:35.634 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8291], 00:32:35.634 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11338], 00:32:35.634 | 99.00th=[12780], 99.50th=[13698], 99.90th=[15533], 99.95th=[15795], 00:32:35.634 | 99.99th=[15926] 00:32:35.634 bw ( KiB/s): min=70112, max=90091, per=49.70%, avg=77090.75, stdev=8940.61, samples=4 00:32:35.634 iops : min= 4382, max= 5630, avg=4818.00, stdev=558.46, samples=4 00:32:35.634 write: IOPS=5718, BW=89.3MiB/s (93.7MB/s)(158MiB/1763msec); 0 zone resets 00:32:35.634 slat (usec): min=39, max=458, avg=41.13, stdev= 9.31 00:32:35.634 clat (usec): min=2727, max=17401, avg=9013.47, stdev=1436.83 00:32:35.634 lat (usec): min=2767, max=17538, avg=9054.60, stdev=1439.83 00:32:35.634 clat percentiles (usec): 00:32:35.634 | 1.00th=[ 5800], 5.00th=[ 6980], 10.00th=[ 7373], 20.00th=[ 7832], 
00:32:35.634 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:32:35.634 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10683], 95.00th=[11207], 00:32:35.634 | 99.00th=[12780], 99.50th=[14615], 99.90th=[16581], 99.95th=[17171], 00:32:35.634 | 99.99th=[17433] 00:32:35.634 bw ( KiB/s): min=74880, max=92646, per=87.75%, avg=80281.50, stdev=8334.58, samples=4 00:32:35.634 iops : min= 4680, max= 5790, avg=5017.50, stdev=520.73, samples=4 00:32:35.634 lat (msec) : 4=0.76%, 10=80.85%, 20=18.40% 00:32:35.634 cpu : usr=86.92%, sys=12.18%, ctx=15, majf=0, minf=37 00:32:35.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:32:35.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:35.635 issued rwts: total=19426,10081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:35.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:35.635 00:32:35.635 Run status group 0 (all jobs): 00:32:35.635 READ: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=304MiB (318MB), run=2004-2004msec 00:32:35.635 WRITE: bw=89.3MiB/s (93.7MB/s), 89.3MiB/s-89.3MiB/s (93.7MB/s-93.7MB/s), io=158MiB (165MB), run=1763-1763msec 00:32:35.635 22:20:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:35.635 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:35.635 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:35.635 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:35.635 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:35.635 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:32:35.635 22:20:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:35.635 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:35.635 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:35.895 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:32:35.895 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:32:35.895 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:32:36.156 Nvme0n1 00:32:36.156 22:20:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:36.727 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=63865a94-e0bb-47d6-bd68-58f1ec84cc21 00:32:36.728 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 63865a94-e0bb-47d6-bd68-58f1ec84cc21 00:32:36.728 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=63865a94-e0bb-47d6-bd68-58f1ec84cc21 00:32:36.728 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:36.728 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:36.988 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:36.989 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 
00:32:36.989 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:36.989 { 00:32:36.989 "uuid": "63865a94-e0bb-47d6-bd68-58f1ec84cc21", 00:32:36.989 "name": "lvs_0", 00:32:36.989 "base_bdev": "Nvme0n1", 00:32:36.989 "total_data_clusters": 1787, 00:32:36.989 "free_clusters": 1787, 00:32:36.989 "block_size": 512, 00:32:36.989 "cluster_size": 1073741824 00:32:36.989 } 00:32:36.989 ]' 00:32:36.989 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="63865a94-e0bb-47d6-bd68-58f1ec84cc21") .free_clusters' 00:32:36.989 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:32:36.989 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="63865a94-e0bb-47d6-bd68-58f1ec84cc21") .cluster_size' 00:32:37.250 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:32:37.250 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:32:37.250 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:32:37.250 1829888 00:32:37.250 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:32:37.250 a016e9f2-16a7-4ffb-a565-c3496a140004 00:32:37.250 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:37.511 22:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:37.772 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:38.059 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:38.059 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:38.059 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:38.059 22:20:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:38.324 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:38.324 fio-3.35 00:32:38.324 Starting 1 thread 00:32:40.868 00:32:40.868 test: (groupid=0, jobs=1): err= 0: pid=3694307: Sat Oct 12 22:20:58 2024 00:32:40.868 read: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(81.3MiB/2005msec) 00:32:40.868 slat (usec): min=2, max=113, avg= 2.21, stdev= 1.12 00:32:40.869 clat (usec): min=2516, max=10886, avg=6800.69, stdev=498.87 00:32:40.869 lat (usec): min=2534, max=10888, avg=6802.90, stdev=498.81 00:32:40.869 clat percentiles (usec): 
00:32:40.869 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:32:40.869 | 30.00th=[ 6587], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:32:40.869 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7570], 00:32:40.869 | 99.00th=[ 7898], 99.50th=[ 8094], 99.90th=[ 8979], 99.95th=[ 9372], 00:32:40.869 | 99.99th=[10159] 00:32:40.869 bw ( KiB/s): min=40288, max=42104, per=99.88%, avg=41466.00, stdev=811.03, samples=4 00:32:40.869 iops : min=10072, max=10526, avg=10366.50, stdev=202.76, samples=4 00:32:40.869 write: IOPS=10.4k, BW=40.6MiB/s (42.5MB/s)(81.3MiB/2005msec); 0 zone resets 00:32:40.869 slat (nsec): min=2082, max=112891, avg=2278.40, stdev=819.11 00:32:40.869 clat (usec): min=1056, max=10052, avg=5440.26, stdev=434.83 00:32:40.869 lat (usec): min=1063, max=10054, avg=5442.54, stdev=434.81 00:32:40.869 clat percentiles (usec): 00:32:40.869 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4948], 20.00th=[ 5080], 00:32:40.869 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:32:40.869 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5932], 95.00th=[ 6128], 00:32:40.869 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 8029], 99.95th=[ 8979], 00:32:40.869 | 99.99th=[10028] 00:32:40.869 bw ( KiB/s): min=40784, max=41952, per=100.00%, avg=41532.00, stdev=519.01, samples=4 00:32:40.869 iops : min=10196, max=10488, avg=10383.00, stdev=129.75, samples=4 00:32:40.869 lat (msec) : 2=0.02%, 4=0.11%, 10=99.85%, 20=0.02% 00:32:40.869 cpu : usr=73.95%, sys=25.15%, ctx=41, majf=0, minf=20 00:32:40.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:40.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:40.869 issued rwts: total=20809,20817,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:40.869 00:32:40.869 Run 
status group 0 (all jobs): 00:32:40.869 READ: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=81.3MiB (85.2MB), run=2005-2005msec 00:32:40.869 WRITE: bw=40.6MiB/s (42.5MB/s), 40.6MiB/s-40.6MiB/s (42.5MB/s-42.5MB/s), io=81.3MiB (85.3MB), run=2005-2005msec 00:32:40.869 22:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:40.869 22:20:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:41.810 22:20:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=0fb0c685-9772-4512-bd17-ff9765946c98 00:32:41.810 22:20:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 0fb0c685-9772-4512-bd17-ff9765946c98 00:32:41.810 22:20:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=0fb0c685-9772-4512-bd17-ff9765946c98 00:32:41.810 22:20:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:41.810 22:20:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:41.810 22:20:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:41.810 22:20:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:41.810 22:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:41.810 { 00:32:41.810 "uuid": "63865a94-e0bb-47d6-bd68-58f1ec84cc21", 00:32:41.810 "name": "lvs_0", 00:32:41.810 "base_bdev": "Nvme0n1", 00:32:41.810 "total_data_clusters": 1787, 00:32:41.810 "free_clusters": 0, 00:32:41.810 "block_size": 512, 00:32:41.810 "cluster_size": 1073741824 00:32:41.810 }, 
00:32:41.810 { 00:32:41.810 "uuid": "0fb0c685-9772-4512-bd17-ff9765946c98", 00:32:41.810 "name": "lvs_n_0", 00:32:41.810 "base_bdev": "a016e9f2-16a7-4ffb-a565-c3496a140004", 00:32:41.810 "total_data_clusters": 457025, 00:32:41.810 "free_clusters": 457025, 00:32:41.810 "block_size": 512, 00:32:41.810 "cluster_size": 4194304 00:32:41.810 } 00:32:41.810 ]' 00:32:41.810 22:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="0fb0c685-9772-4512-bd17-ff9765946c98") .free_clusters' 00:32:41.810 22:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:32:41.810 22:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="0fb0c685-9772-4512-bd17-ff9765946c98") .cluster_size' 00:32:41.810 22:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:41.810 22:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:32:41.810 22:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:32:41.810 1828100 00:32:41.810 22:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:32:42.753 47ea1ad4-bfc0-4da0-8936-76fa906572f0 00:32:42.754 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:42.754 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:43.015 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 
-- # asan_lib= 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:43.275 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:43.276 22:21:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:43.536 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:43.536 fio-3.35 00:32:43.536 Starting 1 thread 00:32:46.079 00:32:46.079 test: (groupid=0, jobs=1): err= 0: pid=3695617: Sat Oct 12 22:21:04 2024 00:32:46.079 read: IOPS=9275, BW=36.2MiB/s (38.0MB/s)(72.7MiB/2006msec) 00:32:46.079 slat (usec): min=2, max=105, avg= 2.27, stdev= 1.14 00:32:46.079 clat (usec): min=2103, max=12605, avg=7633.15, stdev=603.32 00:32:46.079 lat (usec): min=2120, max=12607, avg=7635.42, stdev=603.24 00:32:46.079 clat percentiles (usec): 00:32:46.079 | 1.00th=[ 6194], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:32:46.079 | 
30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:32:46.079 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:32:46.079 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10814], 99.95th=[11731], 00:32:46.079 | 99.99th=[12649] 00:32:46.079 bw ( KiB/s): min=36072, max=37696, per=99.93%, avg=37076.00, stdev=700.37, samples=4 00:32:46.079 iops : min= 9018, max= 9424, avg=9269.00, stdev=175.09, samples=4 00:32:46.079 write: IOPS=9282, BW=36.3MiB/s (38.0MB/s)(72.7MiB/2006msec); 0 zone resets 00:32:46.079 slat (usec): min=2, max=114, avg= 2.34, stdev= 1.01 00:32:46.079 clat (usec): min=1019, max=11439, avg=6088.42, stdev=533.52 00:32:46.079 lat (usec): min=1026, max=11442, avg=6090.76, stdev=533.53 00:32:46.079 clat percentiles (usec): 00:32:46.079 | 1.00th=[ 4883], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5669], 00:32:46.079 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:32:46.079 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6718], 95.00th=[ 6849], 00:32:46.079 | 99.00th=[ 7308], 99.50th=[ 7767], 99.90th=[ 9110], 99.95th=[10552], 00:32:46.079 | 99.99th=[10945] 00:32:46.079 bw ( KiB/s): min=36904, max=37376, per=99.99%, avg=37124.00, stdev=224.81, samples=4 00:32:46.079 iops : min= 9226, max= 9344, avg=9281.00, stdev=56.20, samples=4 00:32:46.079 lat (msec) : 2=0.01%, 4=0.11%, 10=99.76%, 20=0.12% 00:32:46.079 cpu : usr=73.82%, sys=25.49%, ctx=36, majf=0, minf=20 00:32:46.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:46.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:46.079 issued rwts: total=18607,18620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:46.079 00:32:46.079 Run status group 0 (all jobs): 00:32:46.079 READ: bw=36.2MiB/s (38.0MB/s), 36.2MiB/s-36.2MiB/s (38.0MB/s-38.0MB/s), 
io=72.7MiB (76.2MB), run=2006-2006msec 00:32:46.079 WRITE: bw=36.3MiB/s (38.0MB/s), 36.3MiB/s-36.3MiB/s (38.0MB/s-38.0MB/s), io=72.7MiB (76.3MB), run=2006-2006msec 00:32:46.079 22:21:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:46.079 22:21:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:46.079 22:21:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:47.990 22:21:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:48.250 22:21:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:48.820 22:21:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:49.081 22:21:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:51.005 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- 
# set +e 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.006 rmmod nvme_tcp 00:32:51.006 rmmod nvme_fabrics 00:32:51.006 rmmod nvme_keyring 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 3691766 ']' 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 3691766 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3691766 ']' 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3691766 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:51.006 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3691766 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3691766' 00:32:51.266 killing process with pid 3691766 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3691766 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3691766 00:32:51.266 
22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.266 22:21:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.891 22:21:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.891 00:32:53.891 real 0m33.268s 00:32:53.891 user 2m37.805s 00:32:53.891 sys 0m10.120s 00:32:53.891 22:21:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:53.891 22:21:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.891 ************************************ 00:32:53.891 END TEST nvmf_fio_host 00:32:53.891 ************************************ 00:32:53.891 22:21:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 
00:32:53.891 22:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:53.891 22:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:53.891 22:21:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.891 ************************************ 00:32:53.891 START TEST nvmf_failover 00:32:53.891 ************************************ 00:32:53.891 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:53.891 * Looking for test storage... 00:32:53.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:53.891 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:53.891 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:32:53.891 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:53.892 22:21:11 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:53.892 22:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:53.892 22:21:12 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:53.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.892 --rc genhtml_branch_coverage=1 00:32:53.892 --rc genhtml_function_coverage=1 00:32:53.892 --rc genhtml_legend=1 00:32:53.892 --rc geninfo_all_blocks=1 00:32:53.892 --rc geninfo_unexecuted_blocks=1 00:32:53.892 00:32:53.892 ' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:53.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.892 --rc genhtml_branch_coverage=1 00:32:53.892 --rc genhtml_function_coverage=1 00:32:53.892 --rc genhtml_legend=1 00:32:53.892 --rc geninfo_all_blocks=1 00:32:53.892 --rc geninfo_unexecuted_blocks=1 00:32:53.892 00:32:53.892 ' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:53.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.892 --rc genhtml_branch_coverage=1 00:32:53.892 --rc genhtml_function_coverage=1 00:32:53.892 --rc genhtml_legend=1 00:32:53.892 --rc geninfo_all_blocks=1 00:32:53.892 --rc geninfo_unexecuted_blocks=1 00:32:53.892 00:32:53.892 ' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:53.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.892 --rc genhtml_branch_coverage=1 00:32:53.892 --rc genhtml_function_coverage=1 00:32:53.892 --rc genhtml_legend=1 00:32:53.892 --rc geninfo_all_blocks=1 00:32:53.892 --rc geninfo_unexecuted_blocks=1 00:32:53.892 00:32:53.892 ' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.892 22:21:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:53.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:53.892 22:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:02.088 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci 
in "${pci_devs[@]}" 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:02.089 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:02.089 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:02.089 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:02.089 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:02.089 22:21:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:02.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:02.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:33:02.089 00:33:02.089 --- 10.0.0.2 ping statistics --- 00:33:02.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.089 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:02.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:02.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:33:02.089 00:33:02.089 --- 10.0.0.1 ping statistics --- 00:33:02.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.089 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=3701717 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 3701717 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3701717 ']' 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:02.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:02.089 22:21:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:02.089 [2024-10-12 22:21:19.602550] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:02.089 [2024-10-12 22:21:19.602613] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:02.089 [2024-10-12 22:21:19.693162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:02.089 [2024-10-12 22:21:19.741443] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:02.089 [2024-10-12 22:21:19.741498] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:02.089 [2024-10-12 22:21:19.741506] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:02.089 [2024-10-12 22:21:19.741513] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:02.089 [2024-10-12 22:21:19.741519] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:02.089 [2024-10-12 22:21:19.741683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:33:02.089 [2024-10-12 22:21:19.741833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.089 [2024-10-12 22:21:19.741833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:33:02.090 22:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:02.090 22:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:02.090 22:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:02.090 22:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:02.090 22:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:02.090 22:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:02.090 22:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:02.351 [2024-10-12 22:21:20.631091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:02.351 22:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:02.612 Malloc0 00:33:02.612 22:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:02.612 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:02.873 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:03.133 [2024-10-12 22:21:21.446383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:03.133 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:03.394 [2024-10-12 22:21:21.638985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:03.394 [2024-10-12 22:21:21.827649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3702101 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3702101 /var/tmp/bdevperf.sock 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 
-- # '[' -z 3702101 ']' 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:03.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:03.394 22:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:03.654 22:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:03.654 22:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:03.654 22:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:03.915 NVMe0n1 00:33:03.915 22:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:04.175 00:33:04.436 22:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:04.436 22:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3702411 00:33:04.436 22:21:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:05.376 22:21:23 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.376 [2024-10-12 22:21:23.828927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbf940 is same with the state(6) to be set 00:33:05.376 [last message repeated for tqpair=0x1fbf940 through 22:21:23.829221] 00:33:05.377 22:21:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:08.673 22:21:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:08.934 00:33:08.934 22:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:09.196 [2024-10-12 22:21:27.454814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc06f0 is same with the state(6) to be set 00:33:09.196 [last message repeated for tqpair=0x1fc06f0 through 22:21:27.455196]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc06f0 is same with the state(6) to be set 00:33:09.197 [2024-10-12 22:21:27.455200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc06f0 is same with the state(6) to be set 00:33:09.197 [2024-10-12 22:21:27.455205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc06f0 is same with the state(6) to be set 00:33:09.197 [2024-10-12 22:21:27.455209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc06f0 is same with the state(6) to be set 00:33:09.197 [2024-10-12 22:21:27.455214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc06f0 is same with the state(6) to be set 00:33:09.197 [2024-10-12 22:21:27.455218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc06f0 is same with the state(6) to be set 00:33:09.197 [2024-10-12 22:21:27.455222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc06f0 is same with the state(6) to be set 00:33:09.197 22:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:12.491 22:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:12.491 [2024-10-12 22:21:30.648065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.491 22:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:13.431 22:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:13.431 [2024-10-12 22:21:31.834243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.431 [2024-10-12 22:21:31.834277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.431 [2024-10-12 22:21:31.834283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.431 [2024-10-12 22:21:31.834288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.431 [2024-10-12 22:21:31.834293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.431 [2024-10-12 22:21:31.834298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.431 [2024-10-12 22:21:31.834302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.431 [2024-10-12 22:21:31.834307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.431 [2024-10-12 22:21:31.834311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.431 [2024-10-12 22:21:31.834315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with 
the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 
00:33:13.432 [2024-10-12 22:21:31.834392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 
22:21:31.834445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834500] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834554] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834611] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834666] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.432 [2024-10-12 22:21:31.834724] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.433 [2024-10-12 22:21:31.834728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.433 [2024-10-12 22:21:31.834733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.433 [2024-10-12 22:21:31.834737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.433 [2024-10-12 22:21:31.834741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.433 [2024-10-12 22:21:31.834746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1300 is same with the state(6) to be set 00:33:13.433 22:21:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3702411 00:33:20.018 { 00:33:20.018 "results": [ 00:33:20.018 { 00:33:20.018 "job": "NVMe0n1", 00:33:20.018 "core_mask": "0x1", 00:33:20.018 "workload": "verify", 00:33:20.018 "status": "finished", 00:33:20.018 "verify_range": { 00:33:20.018 "start": 0, 00:33:20.018 "length": 16384 00:33:20.018 }, 00:33:20.018 "queue_depth": 128, 00:33:20.018 "io_size": 4096, 00:33:20.018 "runtime": 15.044702, 00:33:20.018 "iops": 12278.342236356692, 00:33:20.018 "mibps": 47.962274360768326, 00:33:20.018 "io_failed": 9597, 00:33:20.018 "io_timeout": 0, 00:33:20.018 "avg_latency_us": 9862.502127647895, 00:33:20.018 "min_latency_us": 542.72, 00:33:20.018 "max_latency_us": 43472.21333333333 00:33:20.018 } 00:33:20.018 ], 00:33:20.018 "core_count": 1 00:33:20.018 } 00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3702101 00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3702101 ']' 
00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3702101
00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3702101
00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3702101'
00:33:20.018 killing process with pid 3702101
00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3702101
00:33:20.018 22:21:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3702101
00:33:20.018 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:20.018 [2024-10-12 22:21:21.905539] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:33:20.018 [2024-10-12 22:21:21.905617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702101 ]
00:33:20.018 [2024-10-12 22:21:21.985487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:20.018 [2024-10-12 22:21:22.016167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:33:20.018 Running I/O for 15 seconds...
00:33:20.018 11064.00 IOPS, 43.22 MiB/s [2024-10-12T20:21:38.507Z]
00:33:20.018 [2024-10-12 22:21:23.829705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:20.018 [2024-10-12 22:21:23.829738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:20.019 [analogous READ command / ABORTED - SQ DELETION completion pairs repeated for lba 95024 through 95280]
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:20.019 [2024-10-12 22:21:23.830425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:20.019 [2024-10-12 22:21:23.830714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.019 [2024-10-12 22:21:23.830902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.019 [2024-10-12 22:21:23.830912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.830919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.830928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.830936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.830945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.830952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.830961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.830969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.830978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.830986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.830995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 
[2024-10-12 22:21:23.831002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.020 [2024-10-12 22:21:23.831231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 
[2024-10-12 22:21:23.831298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 [2024-10-12 22:21:23.831570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.020 
[2024-10-12 22:21:23.831587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.020 [2024-10-12 22:21:23.831596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:23.831603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:23.831620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:23.831637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 
22:21:23.831872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.021 [2024-10-12 22:21:23.831905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.021 [2024-10-12 22:21:23.831933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.021 [2024-10-12 22:21:23.831939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96032 len:8 PRP1 0x0 PRP2 0x0 00:33:20.021 [2024-10-12 22:21:23.831948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.831982] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1857530 was disconnected and freed. reset controller. 
00:33:20.021 [2024-10-12 22:21:23.831992] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:20.021 [2024-10-12 22:21:23.832012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.021 [2024-10-12 22:21:23.832021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.832029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.021 [2024-10-12 22:21:23.832037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.832045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.021 [2024-10-12 22:21:23.832052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.832061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.021 [2024-10-12 22:21:23.832068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:23.832076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:20.021 [2024-10-12 22:21:23.835626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.021 [2024-10-12 22:21:23.835651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1836e40 (9): Bad file descriptor 00:33:20.021 [2024-10-12 22:21:24.006481] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:20.021 10200.00 IOPS, 39.84 MiB/s [2024-10-12T20:21:38.510Z] 10536.00 IOPS, 41.16 MiB/s [2024-10-12T20:21:38.510Z] 10687.00 IOPS, 41.75 MiB/s [2024-10-12T20:21:38.510Z] [2024-10-12 22:21:27.455682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.021 [2024-10-12 22:21:27.455875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.021 [2024-10-12 22:21:27.455884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.455889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.455895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.455900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.455907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.455912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.455919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.455924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.455930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.455935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.455942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.455947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.455953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.455959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 
[2024-10-12 22:21:27.455965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.455970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.455977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.455982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.455989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.455994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 
lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 
[2024-10-12 22:21:27.456172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.022 [2024-10-12 22:21:27.456293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.022 [2024-10-12 22:21:27.456298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 
[2024-10-12 22:21:27.456379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 
[2024-10-12 22:21:27.456579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.023 [2024-10-12 22:21:27.456689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.023 [2024-10-12 22:21:27.456701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.023 [2024-10-12 22:21:27.456713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.023 [2024-10-12 22:21:27.456724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.023 [2024-10-12 22:21:27.456735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.023 [2024-10-12 22:21:27.456749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.023 [2024-10-12 22:21:27.456760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.023 [2024-10-12 22:21:27.456772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 
22:21:27.456778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.023 [2024-10-12 22:21:27.456784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.023 [2024-10-12 22:21:27.456791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.023 [2024-10-12 22:21:27.456796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456841] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.456991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.456998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 
[2024-10-12 22:21:27.457109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457174] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.024 [2024-10-12 22:21:27.457213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.024 [2024-10-12 22:21:27.457235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.024 [2024-10-12 22:21:27.457242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70104 len:8 PRP1 0x0 PRP2 0x0 00:33:20.024 [2024-10-12 22:21:27.457248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457280] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: 
qpair 0x18592e0 was disconnected and freed. reset controller. 00:33:20.024 [2024-10-12 22:21:27.457287] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:20.024 [2024-10-12 22:21:27.457303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.024 [2024-10-12 22:21:27.457309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.024 [2024-10-12 22:21:27.457320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.024 [2024-10-12 22:21:27.457325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.024 [2024-10-12 22:21:27.457330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:27.457336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.025 [2024-10-12 22:21:27.457340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:27.457345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:20.025 [2024-10-12 22:21:27.459773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.025 [2024-10-12 22:21:27.459793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1836e40 (9): Bad file descriptor 00:33:20.025 [2024-10-12 22:21:27.492460] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:20.025 11033.20 IOPS, 43.10 MiB/s [2024-10-12T20:21:38.514Z] 11376.67 IOPS, 44.44 MiB/s [2024-10-12T20:21:38.514Z] 11601.43 IOPS, 45.32 MiB/s [2024-10-12T20:21:38.514Z] 11740.25 IOPS, 45.86 MiB/s [2024-10-12T20:21:38.514Z] 11876.56 IOPS, 46.39 MiB/s [2024-10-12T20:21:38.514Z] [2024-10-12 22:21:31.836312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 
22:21:31.836524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836588] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.025 [2024-10-12 22:21:31.836680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.025 [2024-10-12 22:21:31.836692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.025 [2024-10-12 22:21:31.836705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.025 [2024-10-12 22:21:31.836716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.025 [2024-10-12 22:21:31.836728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.025 [2024-10-12 22:21:31.836740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.025 [2024-10-12 22:21:31.836751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.025 [2024-10-12 22:21:31.836763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.025 [2024-10-12 22:21:31.836774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.025 [2024-10-12 22:21:31.836781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 
22:21:31.836856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.836976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:20.026 [2024-10-12 22:21:31.836988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.836994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8648 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.026 [2024-10-12 22:21:31.837165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.026 [2024-10-12 22:21:31.837170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 
22:21:31.837188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.027 [2024-10-12 22:21:31.837313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837334] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8784 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8792 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8808 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8816 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8824 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8840 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8848 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8856 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8872 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8880 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8888 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 
[2024-10-12 22:21:31.837597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8904 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.027 [2024-10-12 22:21:31.837633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.027 [2024-10-12 22:21:31.837637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8912 len:8 PRP1 0x0 PRP2 0x0 00:33:20.027 [2024-10-12 22:21:31.837642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.027 [2024-10-12 22:21:31.837647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:8920 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8936 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8944 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837719] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8952 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8968 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837783] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8976 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8984 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.837839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9000 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.837844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.837850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.837853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9008 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9016 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9032 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9040 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9048 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9064 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9072 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9080 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 
[2024-10-12 22:21:31.850581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9096 len:8 PRP1 0x0 PRP2 0x0 00:33:20.028 [2024-10-12 22:21:31.850618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.028 [2024-10-12 22:21:31.850625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.028 [2024-10-12 22:21:31.850630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.028 [2024-10-12 22:21:31.850636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9104 len:8 PRP1 0x0 PRP2 0x0 00:33:20.029 [2024-10-12 22:21:31.850642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.029 [2024-10-12 22:21:31.850649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.029 [2024-10-12 22:21:31.850654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.029 [2024-10-12 22:21:31.850660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:9112 len:8 PRP1 0x0 PRP2 0x0 00:33:20.029 [2024-10-12 22:21:31.850666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.029 [2024-10-12 22:21:31.850673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.029 [2024-10-12 22:21:31.850679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.029 [2024-10-12 22:21:31.850684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:8 PRP1 0x0 PRP2 0x0 00:33:20.029 [2024-10-12 22:21:31.850691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.029 [2024-10-12 22:21:31.850697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.029 [2024-10-12 22:21:31.850702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.029 [2024-10-12 22:21:31.850708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9128 len:8 PRP1 0x0 PRP2 0x0 00:33:20.029 [2024-10-12 22:21:31.850715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.029 [2024-10-12 22:21:31.850722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:20.029 [2024-10-12 22:21:31.850727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:20.029 [2024-10-12 22:21:31.850732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9136 len:8 PRP1 0x0 PRP2 0x0 00:33:20.029 [2024-10-12 22:21:31.850739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.029 [2024-10-12 22:21:31.850780] 
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1865c00 was disconnected and freed. reset controller. 00:33:20.029 [2024-10-12 22:21:31.850789] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:20.029 [2024-10-12 22:21:31.850817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.029 [2024-10-12 22:21:31.850825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.029 [2024-10-12 22:21:31.850835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.029 [2024-10-12 22:21:31.850842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.029 [2024-10-12 22:21:31.850849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.029 [2024-10-12 22:21:31.850856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.029 [2024-10-12 22:21:31.850864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:20.029 [2024-10-12 22:21:31.850871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.029 [2024-10-12 22:21:31.850878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:20.029 [2024-10-12 22:21:31.850917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1836e40 (9): Bad file descriptor 00:33:20.029 [2024-10-12 22:21:31.854153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.029 [2024-10-12 22:21:31.889578] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:20.029 11935.10 IOPS, 46.62 MiB/s [2024-10-12T20:21:38.518Z] 12031.00 IOPS, 47.00 MiB/s [2024-10-12T20:21:38.518Z] 12122.67 IOPS, 47.35 MiB/s [2024-10-12T20:21:38.518Z] 12203.00 IOPS, 47.67 MiB/s [2024-10-12T20:21:38.518Z] 12263.14 IOPS, 47.90 MiB/s [2024-10-12T20:21:38.518Z] 12314.67 IOPS, 48.10 MiB/s 00:33:20.029 Latency(us) 00:33:20.029 [2024-10-12T20:21:38.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.029 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:20.029 Verification LBA range: start 0x0 length 0x4000 00:33:20.029 NVMe0n1 : 15.04 12278.34 47.96 637.90 0.00 9862.50 542.72 43472.21 00:33:20.029 [2024-10-12T20:21:38.518Z] =================================================================================================================== 00:33:20.029 [2024-10-12T20:21:38.518Z] Total : 12278.34 47.96 637.90 0.00 9862.50 542.72 43472.21 00:33:20.029 Received shutdown signal, test time was about 15.000000 seconds 00:33:20.029 00:33:20.029 Latency(us) 00:33:20.029 [2024-10-12T20:21:38.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.029 [2024-10-12T20:21:38.518Z] =================================================================================================================== 00:33:20.029 [2024-10-12T20:21:38.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:20.029 22:21:38 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3705277 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3705277 /var/tmp/bdevperf.sock 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3705277 ']' 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:20.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:20.029 [2024-10-12 22:21:38.430352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:20.029 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:20.288 [2024-10-12 22:21:38.614843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:20.289 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:20.548 NVMe0n1 00:33:20.548 22:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:21.115 00:33:21.115 22:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:33:21.115 00:33:21.374 22:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:21.375 22:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:21.375 22:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:21.634 22:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:24.925 22:21:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:24.925 22:21:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:24.925 22:21:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3706224 00:33:24.925 22:21:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:24.925 22:21:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3706224 00:33:25.866 { 00:33:25.866 "results": [ 00:33:25.866 { 00:33:25.866 "job": "NVMe0n1", 00:33:25.866 "core_mask": "0x1", 00:33:25.866 "workload": "verify", 00:33:25.866 "status": "finished", 00:33:25.866 "verify_range": { 00:33:25.866 "start": 0, 00:33:25.866 "length": 16384 00:33:25.866 }, 00:33:25.866 "queue_depth": 128, 00:33:25.866 "io_size": 4096, 00:33:25.866 "runtime": 1.004003, 00:33:25.866 "iops": 13036.813635019018, 00:33:25.866 "mibps": 50.92505326179304, 00:33:25.866 "io_failed": 0, 00:33:25.866 "io_timeout": 0, 00:33:25.866 "avg_latency_us": 9782.827171925535, 00:33:25.866 
"min_latency_us": 2143.5733333333333, 00:33:25.866 "max_latency_us": 9830.4 00:33:25.866 } 00:33:25.866 ], 00:33:25.866 "core_count": 1 00:33:25.866 } 00:33:25.866 22:21:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:25.866 [2024-10-12 22:21:38.093406] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:25.866 [2024-10-12 22:21:38.093467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705277 ] 00:33:25.866 [2024-10-12 22:21:38.169595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.866 [2024-10-12 22:21:38.197160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.866 [2024-10-12 22:21:39.956683] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:25.866 [2024-10-12 22:21:39.956718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:25.866 [2024-10-12 22:21:39.956727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.866 [2024-10-12 22:21:39.956734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:25.866 [2024-10-12 22:21:39.956739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.866 [2024-10-12 22:21:39.956745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:25.866 [2024-10-12 22:21:39.956750] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.866 [2024-10-12 22:21:39.956755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:25.866 [2024-10-12 22:21:39.956760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.866 [2024-10-12 22:21:39.956771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:25.866 [2024-10-12 22:21:39.956793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:25.866 [2024-10-12 22:21:39.956804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe78e40 (9): Bad file descriptor 00:33:25.866 [2024-10-12 22:21:40.100358] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:25.866 Running I/O for 1 seconds... 
00:33:25.866 12961.00 IOPS, 50.63 MiB/s 00:33:25.866 Latency(us) 00:33:25.866 [2024-10-12T20:21:44.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.866 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:25.866 Verification LBA range: start 0x0 length 0x4000 00:33:25.866 NVMe0n1 : 1.00 13036.81 50.93 0.00 0.00 9782.83 2143.57 9830.40 00:33:25.866 [2024-10-12T20:21:44.355Z] =================================================================================================================== 00:33:25.866 [2024-10-12T20:21:44.355Z] Total : 13036.81 50.93 0.00 0.00 9782.83 2143.57 9830.40 00:33:25.866 22:21:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:25.866 22:21:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:26.127 22:21:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:26.386 22:21:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:26.386 22:21:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:26.386 22:21:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:26.645 22:21:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3705277 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3705277 ']' 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3705277 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3705277 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3705277' 00:33:29.938 killing process with pid 3705277 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3705277 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3705277 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:29.938 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:30.197 rmmod nvme_tcp 00:33:30.197 rmmod nvme_fabrics 00:33:30.197 rmmod nvme_keyring 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 3701717 ']' 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 3701717 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3701717 ']' 00:33:30.197 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3701717 00:33:30.198 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:33:30.198 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:30.198 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3701717 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3701717' 00:33:30.457 killing process with pid 3701717 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3701717 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3701717 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.457 22:21:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.000 22:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:33.000 00:33:33.000 real 0m39.123s 00:33:33.000 user 1m58.888s 00:33:33.000 sys 
0m8.699s 00:33:33.000 22:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:33.000 22:21:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:33.000 ************************************ 00:33:33.000 END TEST nvmf_failover 00:33:33.000 ************************************ 00:33:33.000 22:21:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:33.000 22:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:33.000 22:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:33.000 22:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:33.000 ************************************ 00:33:33.000 START TEST nvmf_host_discovery 00:33:33.000 ************************************ 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:33.000 * Looking for test storage... 
00:33:33.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:33.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.000 --rc genhtml_branch_coverage=1 00:33:33.000 --rc genhtml_function_coverage=1 00:33:33.000 --rc 
genhtml_legend=1 00:33:33.000 --rc geninfo_all_blocks=1 00:33:33.000 --rc geninfo_unexecuted_blocks=1 00:33:33.000 00:33:33.000 ' 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:33.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.000 --rc genhtml_branch_coverage=1 00:33:33.000 --rc genhtml_function_coverage=1 00:33:33.000 --rc genhtml_legend=1 00:33:33.000 --rc geninfo_all_blocks=1 00:33:33.000 --rc geninfo_unexecuted_blocks=1 00:33:33.000 00:33:33.000 ' 00:33:33.000 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:33.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.000 --rc genhtml_branch_coverage=1 00:33:33.000 --rc genhtml_function_coverage=1 00:33:33.000 --rc genhtml_legend=1 00:33:33.001 --rc geninfo_all_blocks=1 00:33:33.001 --rc geninfo_unexecuted_blocks=1 00:33:33.001 00:33:33.001 ' 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:33.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.001 --rc genhtml_branch_coverage=1 00:33:33.001 --rc genhtml_function_coverage=1 00:33:33.001 --rc genhtml_legend=1 00:33:33.001 --rc geninfo_all_blocks=1 00:33:33.001 --rc geninfo_unexecuted_blocks=1 00:33:33.001 00:33:33.001 ' 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:33.001 22:21:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:33.001 22:21:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:33.001 22:21:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:33.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 
00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:33.001 22:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:41.135 
22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:33:41.135 22:21:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:41.135 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:41.135 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:33:41.135 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:41.136 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:41.136 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:41.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:41.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:33:41.136 00:33:41.136 --- 10.0.0.2 ping statistics --- 00:33:41.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.136 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:41.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:41.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:33:41.136 00:33:41.136 --- 10.0.0.1 ping statistics --- 00:33:41.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.136 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:33:41.136 22:21:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=3711446 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 3711446 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3711446 ']' 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:41.136 22:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.136 [2024-10-12 22:21:58.795642] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:33:41.136 [2024-10-12 22:21:58.795711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:41.136 [2024-10-12 22:21:58.884168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.136 [2024-10-12 22:21:58.917026] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:41.136 [2024-10-12 22:21:58.917066] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:41.136 [2024-10-12 22:21:58.917076] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:41.136 [2024-10-12 22:21:58.917083] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:41.136 [2024-10-12 22:21:58.917090] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:41.136 [2024-10-12 22:21:58.917118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.136 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:41.136 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:33:41.136 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:33:41.136 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:41.136 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 [2024-10-12 22:21:59.630571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 [2024-10-12 22:21:59.642741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:41.396 22:21:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 null0 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 null1 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3711737 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3711737 /tmp/host.sock 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 3711737 ']' 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:41.396 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:41.396 22:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.396 [2024-10-12 22:21:59.735204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:41.396 [2024-10-12 22:21:59.735252] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711737 ] 00:33:41.396 [2024-10-12 22:21:59.814080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.396 [2024-10-12 22:21:59.846512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:42.337 
22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:42.337 22:22:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:42.337 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:42.338 
22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:42.338 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.599 [2024-10-12 22:22:00.897943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:42.599 22:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:42.599 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
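Every `waitforcondition` call in the trace re-`eval`s an arbitrary compound condition up to `max=10` times, sleeping between attempts, and only lets the test proceed once the condition holds. A reduced sketch of that helper, with `get_notification_count` stubbed out (the real one queries `/tmp/host.sock`, so the stub is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Reduced sketch of waitforcondition: re-evaluate the condition string
# until it succeeds (return 0) or the retry budget is exhausted (return 1).
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

notification_count=0
expected_count=0
get_notification_count() { :; }   # stub; the real helper calls rpc_cmd

waitforcondition 'get_notification_count && ((notification_count == expected_count))' \
    && echo "condition met"
```

Note that the condition is a string, so it is re-expanded on every attempt; that is why the xtrace shows `eval get_notification_count '&&' '((notification_count' == 'expected_count))'` once per retry.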
00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:42.600 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.860 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:33:42.860 22:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:33:43.120 [2024-10-12 22:22:01.599251] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:43.120 [2024-10-12 22:22:01.599277] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:43.120 [2024-10-12 22:22:01.599291] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:43.381 [2024-10-12 22:22:01.686561] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:43.641 [2024-10-12 22:22:01.871284] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:43.641 [2024-10-12 22:22:01.871308] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:43.641 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:43.641 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:43.641 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:43.641 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:43.641 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:43.641 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.641 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:43.641 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.641 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:43.903 22:22:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:43.903 22:22:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:43.903 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == 
expected_count))' 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.904 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:44.165 22:22:02 
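The `notify_id=1` then `notify_id=2` progression above comes from `get_notification_count`: it fetches only notifications issued after the current `notify_id`, counts them, and advances `notify_id` so the next `is_notification_count_eq` check sees only newer events. A sketch of that bookkeeping, with `rpc_cmd` stubbed to return canned JSON and a `grep`-based count standing in for the real `jq '. | length'` (both assumptions for this sketch):

```shell
#!/usr/bin/env bash
# Sketch of get_notification_count: count notifications newer than
# notify_id and advance notify_id past them.
rpc_cmd() {   # stub; the real rpc_cmd talks to the SPDK RPC socket
    printf '[{"id":1,"type":"bdev_register"},{"id":2,"type":"bdev_register"}]\n'
}

notify_id=0
get_notification_count() {
    notification_count=$(rpc_cmd notify_get_notifications -i "$notify_id" \
        | grep -o '"id"' | wc -l)           # real helper uses jq '. | length'
    notify_id=$(( notify_id + notification_count ))
}

get_notification_count
echo "count=$notification_count notify_id=$notify_id"
```

This cursor-style accounting is why adding the second namespace (null1) reports `notification_count=1` even though two notifications exist in total.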
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.165 [2024-10-12 22:22:02.413968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:44.165 [2024-10-12 22:22:02.414982] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:44.165 [2024-10-12 22:22:02.415007] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:44.165 22:22:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:44.165 
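`get_bdev_list` pipes the `bdev_get_bdevs` RPC output through `jq -r '.[].name'`, then `sort`, then `xargs`, collapsing a JSON array into one sorted, space-separated line that the `[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]` comparisons can match. The tail of that pipeline can be exercised on its own; the hand-written name list below stands in for the jq output:

```shell
#!/usr/bin/env bash
# jq -r '.[].name' emits one bdev name per line; sort orders them and
# xargs joins the lines into a single space-separated string.
names=$(printf 'nvme0n2\nnvme0n1\n' | sort | xargs)
echo "$names"
[[ $names == "nvme0n1 nvme0n2" ]] && echo "bdev list matches"
```

The same shape is used for `get_subsystem_names` (controller names) and `get_subsystem_paths` (trsvcid values, with `sort -n` so `4420 4421` comes out in numeric order).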
[2024-10-12 22:22:02.503283] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # sort -n 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:44.165 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.166 [2024-10-12 22:22:02.563065] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:44.166 [2024-10-12 22:22:02.563083] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:44.166 [2024-10-12 22:22:02.563093] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:44.166 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:44.166 22:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:33:45.106 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:45.106 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:45.106 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:45.106 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:45.106 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:45.106 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.106 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:45.106 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:33:45.106 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.368 [2024-10-12 22:22:03.685382] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:45.368 [2024-10-12 22:22:03.685410] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:45.368 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:45.369 [2024-10-12 22:22:03.693137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.369 [2024-10-12 22:22:03.693163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.369 [2024-10-12 22:22:03.693174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.369 [2024-10-12 22:22:03.693182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.369 [2024-10-12 22:22:03.693190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.369 [2024-10-12 22:22:03.693197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.369 [2024-10-12 22:22:03.693205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.369 [2024-10-12 22:22:03.693212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.369 [2024-10-12 22:22:03.693220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
jq -r '.[].name' 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.369 [2024-10-12 22:22:03.703167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.369 [2024-10-12 22:22:03.713201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.369 [2024-10-12 22:22:03.713533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.369 [2024-10-12 22:22:03.713543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.369 [2024-10-12 22:22:03.713549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.369 [2024-10-12 22:22:03.713557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.369 [2024-10-12 22:22:03.713574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.369 [2024-10-12 22:22:03.713584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.369 [2024-10-12 22:22:03.713590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:33:45.369 [2024-10-12 22:22:03.713599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.369 [2024-10-12 22:22:03.723248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.369 [2024-10-12 22:22:03.723544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.369 [2024-10-12 22:22:03.723553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.369 [2024-10-12 22:22:03.723558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.369 [2024-10-12 22:22:03.723565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.369 [2024-10-12 22:22:03.723573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.369 [2024-10-12 22:22:03.723577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.369 [2024-10-12 22:22:03.723582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:45.369 [2024-10-12 22:22:03.723590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.369 [2024-10-12 22:22:03.733292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.369 [2024-10-12 22:22:03.733473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.369 [2024-10-12 22:22:03.733483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.369 [2024-10-12 22:22:03.733488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.369 [2024-10-12 22:22:03.733496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.369 [2024-10-12 22:22:03.733504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.369 [2024-10-12 22:22:03.733509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.369 [2024-10-12 22:22:03.733514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:45.369 [2024-10-12 22:22:03.733522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:45.369 [2024-10-12 22:22:03.743338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:45.369 [2024-10-12 22:22:03.743632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.369 [2024-10-12 22:22:03.743641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.369 [2024-10-12 22:22:03.743647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.369 [2024-10-12 22:22:03.743654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.369 [2024-10-12 22:22:03.743666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.369 [2024-10-12 22:22:03.743673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.369 [2024-10-12 22:22:03.743683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:45.369 [2024-10-12 22:22:03.743692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.369 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:45.369 [2024-10-12 22:22:03.753382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.369 [2024-10-12 22:22:03.753680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.369 [2024-10-12 22:22:03.753688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.369 [2024-10-12 22:22:03.753693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.369 [2024-10-12 22:22:03.753701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.369 [2024-10-12 22:22:03.754281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.369 [2024-10-12 22:22:03.754290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.369 [2024-10-12 22:22:03.754295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:45.369 [2024-10-12 22:22:03.754302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.369 [2024-10-12 22:22:03.763427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.369 [2024-10-12 22:22:03.763745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.369 [2024-10-12 22:22:03.763754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.369 [2024-10-12 22:22:03.763759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.369 [2024-10-12 22:22:03.763767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.369 [2024-10-12 22:22:03.763778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.369 [2024-10-12 22:22:03.763783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.369 [2024-10-12 22:22:03.763788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:45.369 [2024-10-12 22:22:03.763795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.369 [2024-10-12 22:22:03.773470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.369 [2024-10-12 22:22:03.773768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.369 [2024-10-12 22:22:03.773777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.369 [2024-10-12 22:22:03.773782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.369 [2024-10-12 22:22:03.773790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.369 [2024-10-12 22:22:03.773801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.369 [2024-10-12 22:22:03.773806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.369 [2024-10-12 22:22:03.773811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:45.369 [2024-10-12 22:22:03.773823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.369 [2024-10-12 22:22:03.783518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.370 [2024-10-12 22:22:03.783849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.370 [2024-10-12 22:22:03.783857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.370 [2024-10-12 22:22:03.783862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.370 [2024-10-12 22:22:03.783870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.370 [2024-10-12 22:22:03.783881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.370 [2024-10-12 22:22:03.783885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.370 [2024-10-12 22:22:03.783890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:45.370 [2024-10-12 22:22:03.783897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.370 [2024-10-12 22:22:03.793560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.370 [2024-10-12 22:22:03.793852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.370 [2024-10-12 22:22:03.793861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.370 [2024-10-12 22:22:03.793866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.370 [2024-10-12 22:22:03.793874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.370 [2024-10-12 22:22:03.793885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.370 [2024-10-12 22:22:03.793890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.370 [2024-10-12 22:22:03.793895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:45.370 [2024-10-12 22:22:03.793902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:45.370 [2024-10-12 22:22:03.803604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.370 [2024-10-12 22:22:03.803887] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.370 [2024-10-12 22:22:03.803896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.370 [2024-10-12 22:22:03.803901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.370 [2024-10-12 22:22:03.803909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.370 [2024-10-12 22:22:03.803918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.370 [2024-10-12 22:22:03.803924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.370 [2024-10-12 22:22:03.803930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:45.370 [2024-10-12 22:22:03.803939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.370 [2024-10-12 22:22:03.813651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:45.370 [2024-10-12 22:22:03.813946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.370 [2024-10-12 22:22:03.813954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x196bad0 with addr=10.0.0.2, port=4420 00:33:45.370 [2024-10-12 22:22:03.813959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x196bad0 is same with the state(6) to be set 00:33:45.370 [2024-10-12 22:22:03.813967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196bad0 (9): Bad file descriptor 00:33:45.370 [2024-10-12 22:22:03.813978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:45.370 [2024-10-12 22:22:03.813983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:45.370 [2024-10-12 22:22:03.813988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:45.370 [2024-10-12 22:22:03.813996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.370 [2024-10-12 22:22:03.814046] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:45.370 [2024-10-12 22:22:03.814058] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:33:45.370 22:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:46.755 22:22:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:46.755 22:22:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:46.755 
22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:33:46.755 22:22:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:33:46.755 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:33:46.756 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:46.756 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.756 22:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.696 [2024-10-12 22:22:06.182288] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:47.696 [2024-10-12 22:22:06.182303] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:47.696 [2024-10-12 22:22:06.182312] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:47.957 [2024-10-12 22:22:06.270572] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:47.957 [2024-10-12 22:22:06.335219] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:47.957 [2024-10-12 22:22:06.335243] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:33:47.958 request: 00:33:47.958 { 00:33:47.958 "name": "nvme", 00:33:47.958 "trtype": "tcp", 00:33:47.958 "traddr": "10.0.0.2", 00:33:47.958 "adrfam": "ipv4", 00:33:47.958 "trsvcid": "8009", 00:33:47.958 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:47.958 "wait_for_attach": true, 00:33:47.958 "method": "bdev_nvme_start_discovery", 00:33:47.958 "req_id": 1 00:33:47.958 } 00:33:47.958 Got JSON-RPC error response 00:33:47.958 response: 00:33:47.958 { 00:33:47.958 "code": -17, 00:33:47.958 "message": "File exists" 00:33:47.958 } 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.958 22:22:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:47.958 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.219 request: 00:33:48.219 { 00:33:48.219 "name": "nvme_second", 00:33:48.219 "trtype": "tcp", 00:33:48.219 "traddr": "10.0.0.2", 00:33:48.219 "adrfam": "ipv4", 00:33:48.219 "trsvcid": "8009", 00:33:48.219 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:48.219 "wait_for_attach": true, 00:33:48.219 "method": "bdev_nvme_start_discovery", 00:33:48.219 "req_id": 1 00:33:48.219 } 00:33:48.219 Got JSON-RPC error response 00:33:48.219 response: 00:33:48.219 { 00:33:48.219 "code": -17, 00:33:48.219 "message": "File exists" 00:33:48.219 } 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:48.219 
22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:48.219 22:22:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.219 22:22:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:49.160 [2024-10-12 22:22:07.582650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:49.160 [2024-10-12 22:22:07.582673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a0870 with addr=10.0.0.2, port=8010 00:33:49.160 [2024-10-12 22:22:07.582682] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:49.160 [2024-10-12 22:22:07.582688] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:49.160 [2024-10-12 22:22:07.582693] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:50.102 [2024-10-12 22:22:08.584987] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:50.102 [2024-10-12 22:22:08.585010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a0870 with addr=10.0.0.2, port=8010 00:33:50.102 [2024-10-12 22:22:08.585019] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:50.102 [2024-10-12 22:22:08.585023] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:50.102 [2024-10-12 22:22:08.585028] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:51.488 [2024-10-12 22:22:09.586990] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:51.488 request: 00:33:51.488 { 00:33:51.488 "name": "nvme_second", 00:33:51.488 "trtype": "tcp", 00:33:51.488 "traddr": "10.0.0.2", 00:33:51.488 "adrfam": "ipv4", 00:33:51.488 "trsvcid": "8010", 00:33:51.488 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:51.488 "wait_for_attach": false, 00:33:51.488 "attach_timeout_ms": 3000, 00:33:51.488 "method": "bdev_nvme_start_discovery", 00:33:51.488 "req_id": 1 00:33:51.488 } 00:33:51.488 Got JSON-RPC error response 00:33:51.488 response: 00:33:51.488 { 00:33:51.488 "code": -110, 00:33:51.488 "message": "Connection timed out" 00:33:51.488 } 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3711737 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:51.488 rmmod nvme_tcp 00:33:51.488 rmmod nvme_fabrics 00:33:51.488 rmmod nvme_keyring 00:33:51.488 22:22:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 3711446 ']' 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 3711446 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3711446 ']' 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3711446 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3711446 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3711446' 00:33:51.488 killing process with pid 3711446 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3711446 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3711446 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:33:51.488 22:22:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.488 22:22:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.098 22:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:54.098 00:33:54.098 real 0m20.958s 00:33:54.098 user 0m24.981s 00:33:54.098 sys 0m7.128s 00:33:54.098 22:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:54.098 22:22:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:54.098 ************************************ 00:33:54.098 END TEST nvmf_host_discovery 00:33:54.098 ************************************ 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:54.098 
22:22:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.098 ************************************ 00:33:54.098 START TEST nvmf_host_multipath_status 00:33:54.098 ************************************ 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:54.098 * Looking for test storage... 00:33:54.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:54.098 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:54.098 22:22:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:54.099 
22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:54.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.099 --rc genhtml_branch_coverage=1 00:33:54.099 --rc genhtml_function_coverage=1 00:33:54.099 --rc genhtml_legend=1 00:33:54.099 --rc geninfo_all_blocks=1 00:33:54.099 --rc geninfo_unexecuted_blocks=1 00:33:54.099 00:33:54.099 ' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:54.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.099 --rc genhtml_branch_coverage=1 00:33:54.099 --rc genhtml_function_coverage=1 00:33:54.099 --rc genhtml_legend=1 00:33:54.099 --rc geninfo_all_blocks=1 00:33:54.099 --rc geninfo_unexecuted_blocks=1 00:33:54.099 00:33:54.099 ' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:54.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.099 --rc genhtml_branch_coverage=1 00:33:54.099 --rc genhtml_function_coverage=1 00:33:54.099 --rc genhtml_legend=1 00:33:54.099 --rc geninfo_all_blocks=1 00:33:54.099 --rc geninfo_unexecuted_blocks=1 00:33:54.099 00:33:54.099 ' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:54.099 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:33:54.099 --rc genhtml_branch_coverage=1 00:33:54.099 --rc genhtml_function_coverage=1 00:33:54.099 --rc genhtml_legend=1 00:33:54.099 --rc geninfo_all_blocks=1 00:33:54.099 --rc geninfo_unexecuted_blocks=1 00:33:54.099 00:33:54.099 ' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:54.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:33:54.099 22:22:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:33:54.099 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:54.100 22:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:02.277 
22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:02.277 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:02.277 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:02.277 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:02.278 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:02.278 22:22:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:02.278 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # is_hw=yes 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:02.278 22:22:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:02.278 22:22:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:02.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:02.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:34:02.278 00:34:02.278 --- 10.0.0.2 ping statistics --- 00:34:02.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.278 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:02.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:02.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:34:02.278 00:34:02.278 --- 10.0.0.1 ping statistics --- 00:34:02.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.278 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # return 0 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=3717988 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@506 -- # waitforlisten 3717988 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3717988 ']' 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:02.278 22:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:02.278 [2024-10-12 22:22:19.886798] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:02.278 [2024-10-12 22:22:19.886859] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.278 [2024-10-12 22:22:19.973937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:02.278 [2024-10-12 22:22:20.024898] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.278 [2024-10-12 22:22:20.024955] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:02.278 [2024-10-12 22:22:20.024963] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.278 [2024-10-12 22:22:20.024970] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:02.278 [2024-10-12 22:22:20.024977] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:02.278 [2024-10-12 22:22:20.025182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.278 [2024-10-12 22:22:20.025211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.278 22:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:02.278 22:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:02.278 22:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:02.278 22:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:02.278 22:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:02.278 22:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.278 22:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3717988 00:34:02.278 22:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:02.540 [2024-10-12 22:22:20.909798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.540 22:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:34:02.801 Malloc0 00:34:02.801 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:03.062 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:03.323 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:03.323 [2024-10-12 22:22:21.729812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.323 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:03.584 [2024-10-12 22:22:21.914318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:03.584 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:03.584 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3718354 00:34:03.584 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:03.584 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3718354 /var/tmp/bdevperf.sock 00:34:03.584 22:22:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3718354 ']' 00:34:03.584 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:03.584 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:03.584 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:03.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:03.584 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:03.584 22:22:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:04.527 22:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:04.527 22:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:04.527 22:22:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:04.527 22:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:05.099 Nvme0n1 00:34:05.099 22:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-x multipath -l -1 -o 10 00:34:05.360 Nvme0n1 00:34:05.360 22:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:05.360 22:22:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:07.271 22:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:07.271 22:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:07.531 22:22:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:07.792 22:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:08.733 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:08.733 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:08.733 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.733 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:08.993 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:34:08.993 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:08.993 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.993 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:08.993 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:08.993 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:08.993 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:08.993 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:09.254 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:09.254 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:09.254 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.254 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:09.515 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:34:09.515 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:09.515 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.515 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:09.515 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:09.515 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:09.515 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.515 22:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:09.774 22:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:09.774 22:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:09.774 22:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:10.034 22:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:10.293 22:22:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:11.233 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:11.233 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:11.233 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.233 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:11.493 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:11.493 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:11.493 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.493 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:11.493 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.493 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:11.493 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:11.493 22:22:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:11.753 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.753 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:11.753 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:11.753 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:12.013 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.013 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:12.013 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:12.013 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.272 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.272 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:12.272 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.272 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:12.272 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:12.272 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:12.272 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:12.531 22:22:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:12.790 22:22:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:13.728 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:13.728 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:13.728 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.728 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:13.988 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
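The repeated `port_status` checks above all run `bdev_nvme_get_io_paths` and filter the result with `jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="...").current'` (or `.connected` / `.accessible`). A minimal Python sketch of that query follows; the JSON shape is inferred from the jq filter in this log, not taken from SPDK documentation, and the sample data is hypothetical:

```python
import json

# Hypothetical bdev_nvme_get_io_paths output. The structure
# (poll_groups -> io_paths -> transport.trsvcid plus the three status
# booleans) is inferred from the jq filters visible in the log.
sample = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"},
         "current": true, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"},
         "current": false, "connected": true, "accessible": true}
      ]
    }
  ]
}
""")

def port_status(data, trsvcid, attr):
    """Mimic the log's jq query: find the io_path listening on the given
    port and return one status attribute (current/connected/accessible)."""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[attr]
    return None

print(port_status(sample, "4420", "current"))  # True
print(port_status(sample, "4421", "current"))  # False
```

The test script then string-compares the extracted value against the expected `true`/`false`, exactly as the `[[ true == \t\r\u\e ]]` lines in the log do.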
00:34:13.988 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:13.988 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:13.988 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.988 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:13.988 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:13.988 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.988 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:14.249 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.249 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:14.249 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:14.249 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.510 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:34:14.510 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:14.510 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.510 22:22:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:14.770 22:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.770 22:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:14.770 22:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.770 22:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:14.770 22:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.770 22:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:14.770 22:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:15.030 22:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:15.291 22:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:16.232 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:16.232 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:16.232 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.232 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:16.492 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.492 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:16.492 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.492 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:16.492 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:16.492 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:16.492 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
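Each `set_ANA_state A B` call above is followed by a `check_status` with six booleans: current/current, connected/connected, accessible/accessible for ports 4420 and 4421. The expectation pattern visible in this part of the log (before the policy is switched to `active_active`) can be sketched as below; the selection rule is inferred from the observed checks, not from SPDK internals, so treat it as an illustration only:

```python
def expected_flags(state_4420, state_4421):
    """Reproduce the check_status arguments seen in the log for the
    default (single-current-path) policy. Assumption inferred from the
    log: the current path is the first optimized one, else the first
    non_optimized one; inaccessible paths are never current; both TCP
    connections stay up throughout."""
    states = (state_4420, state_4421)
    accessible = [s != "inaccessible" for s in states]
    current = [False, False]
    for preferred in ("optimized", "non_optimized"):
        if preferred in states:
            current[states.index(preferred)] = True
            break
    connected = [True, True]
    return current + connected + accessible

# Matches "check_status true false true true true false" after
# "set_ANA_state non_optimized inaccessible" in the log:
print(expected_flags("non_optimized", "inaccessible"))
```

The same function reproduces the other transitions in the log, e.g. `optimized optimized` giving `true false true true true true` and `inaccessible inaccessible` giving `false false true true false false`.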
00:34:16.492 22:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:16.753 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.753 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:16.753 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.753 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:17.014 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.014 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:17.014 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.014 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:17.014 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.014 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:17.014 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:17.014 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:17.274 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:17.274 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:17.274 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:17.535 22:22:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:17.795 22:22:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:18.739 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:18.739 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:18.739 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.739 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:19.001 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:19.001 22:22:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:19.001 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:19.001 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.001 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:19.001 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:19.001 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.001 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:19.263 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.263 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:19.263 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.263 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:19.523 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.523 
22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:19.523 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.523 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:19.523 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:19.523 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:19.523 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.523 22:22:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:19.783 22:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:19.783 22:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:19.783 22:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:20.043 22:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 -n optimized 00:34:20.043 22:22:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:21.425 22:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.425 22:22:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:21.685 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.685 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:21.685 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.685 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:21.945 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:21.945 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:21.945 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.945 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:21.945 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:21.945 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:21.945 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:21.945 
22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:22.206 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.206 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:22.467 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:22.467 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:22.467 22:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:22.728 22:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:23.669 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:23.669 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:23.669 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:23.669 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.929 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.929 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:23.929 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.929 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:24.190 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.190 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:24.190 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:24.190 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.190 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.190 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:24.190 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.190 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").connected' 00:34:24.450 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.450 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:24.450 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.450 22:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:24.711 22:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.711 22:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:24.711 22:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.711 22:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:24.711 22:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.711 22:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:24.711 22:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:24.971 22:22:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:25.232 22:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:26.173 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:26.173 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:26.173 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.173 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:26.433 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:26.433 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:26.433 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.433 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:26.693 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.693 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:26.693 
22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.693 22:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:26.693 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.693 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:26.693 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.693 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:26.954 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.954 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:26.954 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.954 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:27.216 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.216 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:34:27.216 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:27.216 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.216 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.216 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:27.216 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:27.476 22:22:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:27.737 22:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:28.678 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:28.678 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:28.678 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.678 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:34:28.938 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.938 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:28.938 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.938 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:28.938 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.938 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:28.938 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.938 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:29.199 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.199 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:29.199 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.200 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:34:29.460 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.460 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:29.460 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.460 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:29.719 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.719 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:29.719 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.719 22:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:29.719 22:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.719 22:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:29.719 22:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:29.978 22:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:30.238 22:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:31.178 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:31.178 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:31.178 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.178 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:31.439 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.439 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:31.439 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.439 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:31.439 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:31.439 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:31.439 22:22:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.439 22:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:31.699 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.699 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:31.699 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.699 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:31.961 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.961 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:31.961 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.961 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:32.222 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.222 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:32.222 
22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:32.222 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.222 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:32.222 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3718354 00:34:32.222 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3718354 ']' 00:34:32.222 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3718354 00:34:32.222 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:34:32.222 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:32.222 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3718354 00:34:32.509 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:34:32.509 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:34:32.509 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3718354' 00:34:32.509 killing process with pid 3718354 00:34:32.509 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3718354 00:34:32.509 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3718354 00:34:32.509 { 00:34:32.509 
"results": [ 00:34:32.509 { 00:34:32.509 "job": "Nvme0n1", 00:34:32.509 "core_mask": "0x4", 00:34:32.509 "workload": "verify", 00:34:32.509 "status": "terminated", 00:34:32.509 "verify_range": { 00:34:32.509 "start": 0, 00:34:32.509 "length": 16384 00:34:32.509 }, 00:34:32.509 "queue_depth": 128, 00:34:32.509 "io_size": 4096, 00:34:32.509 "runtime": 26.94312, 00:34:32.509 "iops": 12043.03733197937, 00:34:32.509 "mibps": 47.04311457804442, 00:34:32.509 "io_failed": 0, 00:34:32.509 "io_timeout": 0, 00:34:32.509 "avg_latency_us": 10610.213873854438, 00:34:32.509 "min_latency_us": 686.08, 00:34:32.509 "max_latency_us": 3075822.933333333 00:34:32.509 } 00:34:32.509 ], 00:34:32.509 "core_count": 1 00:34:32.509 } 00:34:32.509 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3718354 00:34:32.509 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:32.509 [2024-10-12 22:22:21.994351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:32.509 [2024-10-12 22:22:21.994430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718354 ] 00:34:32.509 [2024-10-12 22:22:22.076265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.509 [2024-10-12 22:22:22.122284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:34:32.509 [2024-10-12 22:22:23.571748] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:34:32.509 Running I/O for 90 seconds... 
00:34:32.509 10456.00 IOPS, 40.84 MiB/s [2024-10-12T20:22:50.998Z] 10788.50 IOPS, 42.14 MiB/s [2024-10-12T20:22:50.998Z] 10923.33 IOPS, 42.67 MiB/s [2024-10-12T20:22:50.998Z] 11350.00 IOPS, 44.34 MiB/s [2024-10-12T20:22:50.998Z] 11701.80 IOPS, 45.71 MiB/s [2024-10-12T20:22:50.998Z] 11937.50 IOPS, 46.63 MiB/s [2024-10-12T20:22:50.998Z] 12103.57 IOPS, 47.28 MiB/s [2024-10-12T20:22:50.998Z] 12229.38 IOPS, 47.77 MiB/s [2024-10-12T20:22:50.998Z] 12294.11 IOPS, 48.02 MiB/s [2024-10-12T20:22:50.998Z] 12341.80 IOPS, 48.21 MiB/s [2024-10-12T20:22:50.998Z] 12400.91 IOPS, 48.44 MiB/s [2024-10-12T20:22:50.998Z] 12449.75 IOPS, 48.63 MiB/s [2024-10-12T20:22:50.998Z] [2024-10-12 22:22:35.842377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.509 [2024-10-12 22:22:35.842406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:32.509 [2024-10-12 22:22:35.842423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.509 [2024-10-12 22:22:35.842429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:32.509 [2024-10-12 22:22:35.842440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.509 [2024-10-12 22:22:35.842445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:32.509 [2024-10-12 22:22:35.842456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842461] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22640 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.842925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.842931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.843199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.843208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.843219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.843224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.843235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.843240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:34:32.510 [2024-10-12 22:22:35.843250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.843255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.843265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.843270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.843280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.843285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.843296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.843300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:32.510 [2024-10-12 22:22:35.843311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.510 [2024-10-12 22:22:35.843316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 
[2024-10-12 22:22:35.843331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 
22:22:35.843418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843499] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.843985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.843990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.511 [2024-10-12 22:22:35.844163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:32.511 [2024-10-12 22:22:35.844173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.512 [2024-10-12 22:22:35.844179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.512 [2024-10-12 22:22:35.844431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.512 [2024-10-12 22:22:35.844740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:32.512 [2024-10-12 22:22:35.844752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.513 [2024-10-12 22:22:35.844757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.844768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.513 [2024-10-12 22:22:35.844773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.844783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.513 [2024-10-12 22:22:35.844788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.844798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.513 [2024-10-12 22:22:35.844803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.844814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.513 [2024-10-12 22:22:35.844819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.844829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.513 [2024-10-12 22:22:35.844834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.844844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.513 [2024-10-12 22:22:35.844850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.844861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.513 [2024-10-12 22:22:35.844866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.513 [2024-10-12 22:22:35.845299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.513 [2024-10-12 22:22:35.845760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:32.513 [2024-10-12 22:22:35.845770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.845774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.845785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.845790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.845801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.845806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.845816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.845821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.845831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.845837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.845847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.845852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.845862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.845867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.845877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.856665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.856704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.856711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.856722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.856727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.856742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.856747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.856757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.856762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.856772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.856777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.856787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.856792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.856802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.856807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.856818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.856823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.856833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.856838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:32.514 [2024-10-12 22:22:35.857575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.514 [2024-10-12 22:22:35.857580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.515 [2024-10-12 22:22:35.857824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.515 [2024-10-12 22:22:35.857839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.515 [2024-10-12 22:22:35.857855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.515 [2024-10-12 22:22:35.857870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.515 [2024-10-12 22:22:35.857885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.515 [2024-10-12 22:22:35.857900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.515 [2024-10-12 22:22:35.857915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.515 [2024-10-12 22:22:35.857932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:32.515 [2024-10-12 22:22:35.857942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.515 [2024-10-12 22:22:35.857947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:34:32.515 [2024-10-12 22:22:35.857957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.515 [2024-10-12 22:22:35.857962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs elided: every outstanding READ and WRITE on qid:1 (nsid:1, lba 22104-23008, len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) between 22:22:35.857 and 22:22:35.868 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.518 [2024-10-12 22:22:35.868773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.868791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.518 [2024-10-12 22:22:35.868800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.868819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.518 [2024-10-12 22:22:35.868828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.868847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.518 [2024-10-12 22:22:35.868856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.868874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.518 [2024-10-12 22:22:35.868884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.868903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.518 [2024-10-12 22:22:35.868917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.868936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.518 [2024-10-12 22:22:35.868945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.868965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.518 [2024-10-12 22:22:35.868975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.868993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.518 [2024-10-12 22:22:35.869003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.869021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.518 [2024-10-12 22:22:35.869031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.869050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.518 [2024-10-12 22:22:35.869059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.869078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.518 [2024-10-12 22:22:35.869087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:32.518 [2024-10-12 22:22:35.869114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.519 [2024-10-12 22:22:35.869350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.869653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.869662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.870986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.870995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.871014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.871023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.871041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.871050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.871069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.519 [2024-10-12 22:22:35.871079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:32.519 [2024-10-12 22:22:35.871097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.871975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.871984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.872003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.872012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.872031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.872040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.872059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.872068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.872086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.872096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.872119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.872128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.872147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.872156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.872174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.872184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:32.520 [2024-10-12 22:22:35.872202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.520 [2024-10-12 22:22:35.872211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.872230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.872239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.872259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.872268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.872287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.872296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.872315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.872324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.872343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.872352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.873978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.873997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.874006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.874024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.874034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.874052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.874061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.874080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.874089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.874114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.874123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:32.521 [2024-10-12 22:22:35.874147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.521 [2024-10-12 22:22:35.874157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.522 [2024-10-12 22:22:35.874184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.522 [2024-10-12 22:22:35.874212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.522 [2024-10-12 22:22:35.874239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.522 [2024-10-12 22:22:35.874269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.522 [2024-10-12 22:22:35.874733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.874984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.874993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.875012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.875022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.875858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.875874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.875895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.875905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.875924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.875933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.875952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.875961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.875981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.875992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.876011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.876024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.876043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.876052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.876071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.876080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.876099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.522 [2024-10-12 22:22:35.876115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:32.522 [2024-10-12 22:22:35.876134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.523 [2024-10-12 22:22:35.876143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.523 [2024-10-12 22:22:35.876171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.523 [2024-10-12 22:22:35.876199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.523 [2024-10-12 22:22:35.876226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.523 [2024-10-12 22:22:35.876254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.523 [2024-10-12 22:22:35.876282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.523 [2024-10-12 22:22:35.876311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.523 [2024-10-12 22:22:35.876339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.523 [2024-10-12 22:22:35.876369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.523 [2024-10-12 22:22:35.876397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.876980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.876998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.877007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.877027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.877036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.877056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.877065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.877084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.877093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.877115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.877124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:32.523 [2024-10-12 22:22:35.877143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.523 [2024-10-12 22:22:35.877152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.877618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.877627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.524 [2024-10-12 22:22:35.878902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:32.524 [2024-10-12 22:22:35.878917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.878925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.878940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.878948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.878963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.878970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.878987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.878994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.879009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.879016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.879031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.879038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.879053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.879060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.879075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.879082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.879097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.879108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.879124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.879131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.879145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.879153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:32.525 [2024-10-12 22:22:35.879167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.525 [2024-10-12 22:22:35.879175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:34:32.525 [2024-10-12 22:22:35.879189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.525 [2024-10-12 22:22:35.879197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:34:32.525 [2024-10-12 22:22:35.879212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.525 [2024-10-12 22:22:35.879219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:34:32.525 [2024-10-12 22:22:35.879234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.525 [2024-10-12 22:22:35.879241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... ~240 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs of identical form elided: READ commands (lba:22048–22928, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (lba:22392–22928 and 23056, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1/nsid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0057 through 0046, timestamps 22:22:35.879256–22:22:35.883050 ...]
00:34:32.527 [2024-10-12 22:22:35.883043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.527 [2024-10-12 22:22:35.883050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:34:32.527 [2024-10-12 22:22:35.883065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.527 [2024-10-12 22:22:35.883072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:32.527 [2024-10-12 22:22:35.883087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.527 [2024-10-12 22:22:35.883095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:32.527 [2024-10-12 22:22:35.883113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.527 [2024-10-12 22:22:35.883121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:32.527 [2024-10-12 22:22:35.883136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.527 [2024-10-12 22:22:35.883143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:32.527 [2024-10-12 22:22:35.883159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.528 [2024-10-12 22:22:35.883743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.883916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.883923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.884592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.884604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.884621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.884628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.884643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.884650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.884665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.884673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.884687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.884695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.884709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.884716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.884732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.884739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.884754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.884761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.884776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.884786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:32.528 [2024-10-12 22:22:35.884801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.528 [2024-10-12 22:22:35.884808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.884823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.884830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.884845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.884853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.884867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.884875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.884890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.884897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.884912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.884919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.884933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.884941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.884955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.884963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.884978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.884985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.884999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.885007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.885029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.529 [2024-10-12 22:22:35.885051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:32.529 [2024-10-12 22:22:35.885314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.529 [2024-10-12 22:22:35.885321] nvme_qpair.c: 
00:34:32.529 [2024-10-12 22:22:35.885 - 22:22:35.889] nvme_qpair.c: repeated *NOTICE* record pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): WRITE and READ commands on sqid:1 (nsid:1, len:8, lba 22040-23056) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.532 [2024-10-12 22:22:35.889235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.889992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.889997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.890008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.890012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.890023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.890028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.890038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.890043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.890053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.890058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.890512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.890522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.890533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.890538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.890550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.890556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.890566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.532 [2024-10-12 22:22:35.890571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:32.532 [2024-10-12 22:22:35.890581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:32.533 [2024-10-12 22:22:35.890966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.533 [2024-10-12 22:22:35.890971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:34:32.533 [2024-10-12 22:22:35.890981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:32.533 [2024-10-12 22:22:35.890986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
[... repeated command/completion pairs elided: WRITE (lba 22968-23056, SGL DATA BLOCK OFFSET) and READ (lba 22040-22384, SGL TRANSPORT DATA BLOCK TRANSPORT) commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), p:0 m:0 dnr:0, between 2024-10-12 22:22:35.890996 and 22:22:35.897377 ...]
00:34:32.535 [2024-10-12 22:22:35.897387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:32.535 [2024-10-12 22:22:35.897393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:34:32.535 [2024-10-12 22:22:35.897403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.535 [2024-10-12 22:22:35.897408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:32.535 [2024-10-12 22:22:35.897418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.535 [2024-10-12 22:22:35.897423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:32.535 [2024-10-12 22:22:35.897434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.535 [2024-10-12 22:22:35.897439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:32.535 [2024-10-12 22:22:35.897449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.535 [2024-10-12 22:22:35.897454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:32.535 [2024-10-12 22:22:35.897465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.535 [2024-10-12 22:22:35.897470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:32.535 [2024-10-12 22:22:35.897480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.535 [2024-10-12 22:22:35.897485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:32.535 [2024-10-12 22:22:35.897495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.535 [2024-10-12 22:22:35.897501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:32.535 [2024-10-12 22:22:35.897511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.535 [2024-10-12 22:22:35.897517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:32.535 [2024-10-12 22:22:35.897527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.897777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.897987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.897992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.536 [2024-10-12 22:22:35.898022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.536 [2024-10-12 22:22:35.898353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:32.536 [2024-10-12 22:22:35.898363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.537 [2024-10-12 22:22:35.898368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:32.537 [2024-10-12 22:22:35.898379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.537 [2024-10-12 22:22:35.898384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:32.537 [2024-10-12 22:22:35.898394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.537 [2024-10-12 22:22:35.898399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:32.537 [2024-10-12 22:22:35.898409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.537 [2024-10-12 22:22:35.898416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:32.537 [2024-10-12 22:22:35.898426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.537 [2024-10-12 22:22:35.898431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:32.537 [2024-10-12 22:22:35.898441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.537 [2024-10-12 22:22:35.898447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.537 [2024-10-12 22:22:35.898457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.537 [2024-10-12 22:22:35.898462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.537 [2024-10-12 22:22:35.898472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.537 [2024-10-12 22:22:35.898477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:32.537 [2024-10-12 22:22:35.898488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.537 [2024-10-12 22:22:35.898492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:32.537 [2024-10-12 22:22:35.898503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.537 [2024-10-12 22:22:35.898508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.898518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.898523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.898533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.898538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.898548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.898554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.898564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.898569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.899583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.899588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:34:32.537 [2024-10-12 22:22:35.900744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.537 [2024-10-12 22:22:35.900749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.900991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.900996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.538 [2024-10-12 22:22:35.901510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.538 [2024-10-12 22:22:35.901818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:32.538 [2024-10-12 22:22:35.901831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.901836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.901850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.901855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.901868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:32.539 [2024-10-12 22:22:35.901873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.901887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.901892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.901905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.901911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.901925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.901930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.901943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.901948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.901962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.901967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.901980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.901985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.901999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:32.539 [2024-10-12 22:22:35.902212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:34:32.539 [2024-10-12 22:22:35.902225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:32.539 [2024-10-12 22:22:35.902417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.539 [2024-10-12 22:22:35.902933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:32.539 [2024-10-12 22:22:35.902949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:35.902954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:32.540 11554.69 IOPS, 45.14 MiB/s [2024-10-12T20:22:51.029Z] 10729.36 IOPS, 41.91 MiB/s [2024-10-12T20:22:51.029Z] 10014.07 IOPS, 39.12 MiB/s [2024-10-12T20:22:51.029Z] 10099.62 IOPS, 39.45 MiB/s [2024-10-12T20:22:51.029Z] 10266.65 IOPS, 40.10 MiB/s [2024-10-12T20:22:51.029Z] 10611.89 IOPS, 41.45 MiB/s [2024-10-12T20:22:51.029Z] 10968.74 IOPS, 42.85 MiB/s [2024-10-12T20:22:51.029Z] 11213.35 IOPS, 43.80 MiB/s [2024-10-12T20:22:51.029Z] 11300.52 IOPS, 44.14 MiB/s [2024-10-12T20:22:51.029Z] 11384.41 IOPS, 44.47 MiB/s [2024-10-12T20:22:51.029Z] 11583.61 IOPS, 45.25 MiB/s [2024-10-12T20:22:51.029Z] 11811.75 IOPS, 46.14 MiB/s [2024-10-12T20:22:51.029Z] [2024-10-12 22:22:48.497883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.497919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3960 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:34:32.540 [2024-10-12 22:22:48.500421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 
[2024-10-12 22:22:48.500504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 
22:22:48.500592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:32.540 [2024-10-12 22:22:48.500640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:32.540 [2024-10-12 22:22:48.500645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:32.540 11979.36 IOPS, 46.79 MiB/s [2024-10-12T20:22:51.029Z] 12007.92 IOPS, 46.91 MiB/s [2024-10-12T20:22:51.029Z] Received shutdown signal, test time was about 26.943730 seconds 00:34:32.540 00:34:32.540 Latency(us) 00:34:32.540 [2024-10-12T20:22:51.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.540 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:32.540 Verification LBA range: start 0x0 length 0x4000 00:34:32.540 Nvme0n1 : 26.94 12043.04 47.04 0.00 0.00 10610.21 686.08 3075822.93 00:34:32.540 
[2024-10-12T20:22:51.029Z] =================================================================================================================== 00:34:32.540 [2024-10-12T20:22:51.029Z] Total : 12043.04 47.04 0.00 0.00 10610.21 686.08 3075822.93 00:34:32.540 [2024-10-12 22:22:50.752552] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:34:32.540 22:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.801 rmmod nvme_tcp 00:34:32.801 rmmod nvme_fabrics 00:34:32.801 rmmod nvme_keyring 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 3717988 ']' 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 3717988 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3717988 ']' 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3717988 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3717988 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3717988' 00:34:32.801 killing process with pid 3717988 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3717988 00:34:32.801 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3717988 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:33.061 
22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.061 22:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.972 22:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:34.972 00:34:34.972 real 0m41.329s 00:34:34.972 user 1m46.905s 00:34:34.972 sys 0m11.614s 00:34:34.972 22:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:34.972 22:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:34.972 ************************************ 00:34:34.972 END TEST nvmf_host_multipath_status 00:34:34.972 ************************************ 00:34:34.972 22:22:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 
00:34:34.972 22:22:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:34.972 22:22:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:34.972 22:22:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.233 ************************************ 00:34:35.233 START TEST nvmf_discovery_remove_ifc 00:34:35.233 ************************************ 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:35.233 * Looking for test storage... 00:34:35.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.233 22:22:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:35.233 22:22:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:35.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.233 --rc genhtml_branch_coverage=1 00:34:35.233 --rc genhtml_function_coverage=1 00:34:35.233 --rc genhtml_legend=1 00:34:35.233 --rc geninfo_all_blocks=1 00:34:35.233 --rc geninfo_unexecuted_blocks=1 00:34:35.233 00:34:35.233 ' 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:35.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.233 --rc genhtml_branch_coverage=1 00:34:35.233 --rc genhtml_function_coverage=1 00:34:35.233 --rc genhtml_legend=1 00:34:35.233 --rc geninfo_all_blocks=1 00:34:35.233 --rc geninfo_unexecuted_blocks=1 00:34:35.233 00:34:35.233 ' 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:35.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.233 --rc genhtml_branch_coverage=1 00:34:35.233 --rc genhtml_function_coverage=1 00:34:35.233 --rc genhtml_legend=1 00:34:35.233 --rc geninfo_all_blocks=1 00:34:35.233 --rc geninfo_unexecuted_blocks=1 00:34:35.233 00:34:35.233 ' 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:35.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.233 --rc genhtml_branch_coverage=1 00:34:35.233 --rc genhtml_function_coverage=1 00:34:35.233 --rc genhtml_legend=1 00:34:35.233 --rc geninfo_all_blocks=1 00:34:35.233 --rc geninfo_unexecuted_blocks=1 00:34:35.233 00:34:35.233 ' 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:35.233 22:22:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.233 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.234 
22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:35.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:35.234 
22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.234 22:22:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.442 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 
00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:43.443 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:43.443 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # (( 
0 > 0 )) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:43.443 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@414 -- # [[ up == up ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:43.443 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # is_hw=yes 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.443 22:23:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:43.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:34:43.443 00:34:43.443 --- 10.0.0.2 ping statistics --- 00:34:43.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.443 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:43.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:34:43.443 00:34:43.443 --- 10.0.0.1 ping statistics --- 00:34:43.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.443 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # return 0 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == 
tcp ']' 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=3728228 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 3728228 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3728228 ']' 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:43.443 22:23:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:43.443 [2024-10-12 22:23:01.292515] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:43.444 [2024-10-12 22:23:01.292580] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.444 [2024-10-12 22:23:01.381835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.444 [2024-10-12 22:23:01.428815] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.444 [2024-10-12 22:23:01.428867] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.444 [2024-10-12 22:23:01.428876] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.444 [2024-10-12 22:23:01.428883] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.444 [2024-10-12 22:23:01.428889] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:43.444 [2024-10-12 22:23:01.428912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.728 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:43.728 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:34:43.728 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:34:43.728 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:43.728 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:43.729 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.729 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:43.729 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.729 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:43.729 [2024-10-12 22:23:02.174143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.729 [2024-10-12 22:23:02.182376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:43.729 null0 00:34:43.729 [2024-10-12 22:23:02.214359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.989 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.989 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3728424 00:34:43.989 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3728424 /tmp/host.sock 
00:34:43.989 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:43.989 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3728424 ']' 00:34:43.989 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:34:43.989 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:43.990 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:43.990 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:43.990 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:43.990 22:23:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:43.990 [2024-10-12 22:23:02.290733] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:34:43.990 [2024-10-12 22:23:02.290798] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728424 ] 00:34:43.990 [2024-10-12 22:23:02.373573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.990 [2024-10-12 22:23:02.421013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.933 22:23:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.933 22:23:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:45.875 [2024-10-12 22:23:04.253067] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:45.875 [2024-10-12 22:23:04.253091] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:45.875 [2024-10-12 22:23:04.253108] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:46.136 [2024-10-12 22:23:04.382538] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:46.136 [2024-10-12 22:23:04.443709] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:46.136 [2024-10-12 22:23:04.443755] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:46.136 [2024-10-12 22:23:04.443777] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:46.136 [2024-10-12 22:23:04.443791] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:46.136 [2024-10-12 22:23:04.443812] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:46.136 [2024-10-12 22:23:04.452128] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbb64d0 was disconnected and freed. delete nvme_qpair. 
00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:46.136 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:46.398 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.398 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:46.398 22:23:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:47.340 22:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:47.340 22:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:47.340 22:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:47.340 22:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.340 22:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:47.340 22:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:47.340 22:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:47.340 22:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.340 22:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:47.340 22:23:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:48.284 22:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:48.284 22:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:48.284 22:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:48.284 22:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.284 22:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:48.284 22:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:48.284 22:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:34:48.284 22:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.544 22:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:48.544 22:23:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:49.495 22:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:49.495 22:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:49.495 22:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:49.495 22:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.495 22:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:49.495 22:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.495 22:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:49.495 22:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.495 22:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:49.495 22:23:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:50.436 22:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:50.436 22:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:50.436 22:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:50.436 22:23:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.436 22:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:50.436 22:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:50.436 22:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:50.436 22:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.436 22:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:50.436 22:23:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:51.821 [2024-10-12 22:23:09.884441] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:51.821 [2024-10-12 22:23:09.884475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.821 [2024-10-12 22:23:09.884484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.821 [2024-10-12 22:23:09.884491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.821 [2024-10-12 22:23:09.884497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.821 [2024-10-12 22:23:09.884503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.821 [2024-10-12 22:23:09.884508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.821 [2024-10-12 22:23:09.884513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.821 [2024-10-12 22:23:09.884518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.821 [2024-10-12 22:23:09.884524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:51.821 [2024-10-12 22:23:09.884529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.821 [2024-10-12 22:23:09.884535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92d80 is same with the state(6) to be set 00:34:51.821 [2024-10-12 22:23:09.894462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb92d80 (9): Bad file descriptor 00:34:51.821 22:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:51.821 22:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:51.821 22:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:51.821 22:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.821 22:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:51.821 22:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:51.821 22:23:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:51.821 [2024-10-12 22:23:09.904498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:52.430 [2024-10-12 22:23:10.915209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:52.430 [2024-10-12 22:23:10.915312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb92d80 with addr=10.0.0.2, port=4420 00:34:52.430 [2024-10-12 22:23:10.915361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92d80 is same with the state(6) to be set 00:34:52.430 [2024-10-12 22:23:10.915426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb92d80 (9): Bad file descriptor 00:34:52.430 [2024-10-12 22:23:10.915579] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:52.430 [2024-10-12 22:23:10.915640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:52.430 [2024-10-12 22:23:10.915663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:52.430 [2024-10-12 22:23:10.915687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:52.430 [2024-10-12 22:23:10.915733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:52.430 [2024-10-12 22:23:10.915755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:52.691 22:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.691 22:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:52.691 22:23:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:53.633 [2024-10-12 22:23:11.918161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:53.633 [2024-10-12 22:23:11.918180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:53.633 [2024-10-12 22:23:11.918186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:53.633 [2024-10-12 22:23:11.918192] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:53.633 [2024-10-12 22:23:11.918202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:53.633 [2024-10-12 22:23:11.918220] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:53.633 [2024-10-12 22:23:11.918238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.633 [2024-10-12 22:23:11.918247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.633 [2024-10-12 22:23:11.918256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.633 [2024-10-12 22:23:11.918261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.633 [2024-10-12 22:23:11.918267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.633 [2024-10-12 22:23:11.918272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.633 [2024-10-12 22:23:11.918278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.633 [2024-10-12 22:23:11.918283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.633 [2024-10-12 22:23:11.918288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.633 [2024-10-12 22:23:11.918293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.634 [2024-10-12 22:23:11.918299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:34:53.634 [2024-10-12 22:23:11.918521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb824c0 (9): Bad file descriptor 00:34:53.634 [2024-10-12 22:23:11.919533] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:53.634 [2024-10-12 22:23:11.919541] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:53.634 22:23:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:53.634 22:23:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:53.634 22:23:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:53.634 22:23:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.634 22:23:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:53.634 22:23:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:53.634 22:23:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:53.634 22:23:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.634 22:23:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:53.634 22:23:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.634 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.634 22:23:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:53.634 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:53.634 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:53.634 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:53.634 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.634 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:53.634 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:53.634 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:53.634 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.894 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:53.894 22:23:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:54.835 22:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:54.835 22:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:54.835 22:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:54.835 22:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.835 22:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:54.835 22:23:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:54.835 22:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:54.835 22:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.835 22:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:54.835 22:23:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:55.777 [2024-10-12 22:23:13.978344] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:55.777 [2024-10-12 22:23:13.978364] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:55.777 [2024-10-12 22:23:13.978374] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:55.777 [2024-10-12 22:23:14.066624] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:55.777 22:23:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:55.777 22:23:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:55.777 22:23:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:55.777 22:23:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.777 22:23:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:55.777 22:23:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:55.777 22:23:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:34:55.777 22:23:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.777 [2024-10-12 22:23:14.250358] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:55.777 [2024-10-12 22:23:14.250391] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:55.777 [2024-10-12 22:23:14.250407] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:55.777 [2024-10-12 22:23:14.250418] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:55.777 [2024-10-12 22:23:14.250424] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:55.777 22:23:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:55.777 22:23:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:55.777 [2024-10-12 22:23:14.256255] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb8e760 was disconnected and freed. delete nvme_qpair. 
00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3728424 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3728424 ']' 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3728424 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3728424 
00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3728424' 00:34:57.162 killing process with pid 3728424 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3728424 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3728424 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:57.162 rmmod nvme_tcp 00:34:57.162 rmmod nvme_fabrics 00:34:57.162 rmmod nvme_keyring 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 3728228 ']' 00:34:57.162 
22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 3728228 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3728228 ']' 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3728228 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3728228 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3728228' 00:34:57.162 killing process with pid 3728228 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3728228 00:34:57.162 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3728228 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:34:57.423 22:23:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:57.423 22:23:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.334 22:23:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:59.334 00:34:59.334 real 0m24.339s 00:34:59.334 user 0m29.268s 00:34:59.334 sys 0m7.195s 00:34:59.334 22:23:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:59.334 22:23:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.334 ************************************ 00:34:59.334 END TEST nvmf_discovery_remove_ifc 00:34:59.334 ************************************ 00:34:59.595 22:23:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:59.595 22:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:59.595 22:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:59.595 22:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.595 ************************************ 
00:34:59.595 START TEST nvmf_identify_kernel_target 00:34:59.595 ************************************ 00:34:59.595 22:23:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:59.595 * Looking for test storage... 00:34:59.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:59.595 22:23:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:59.595 22:23:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:34:59.595 22:23:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:59.595 22:23:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:59.595 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:59.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.856 --rc genhtml_branch_coverage=1 00:34:59.856 --rc genhtml_function_coverage=1 00:34:59.856 --rc genhtml_legend=1 00:34:59.856 --rc geninfo_all_blocks=1 00:34:59.856 --rc geninfo_unexecuted_blocks=1 00:34:59.856 00:34:59.856 ' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:59.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.856 --rc genhtml_branch_coverage=1 00:34:59.856 --rc genhtml_function_coverage=1 00:34:59.856 --rc genhtml_legend=1 00:34:59.856 --rc geninfo_all_blocks=1 00:34:59.856 --rc geninfo_unexecuted_blocks=1 00:34:59.856 00:34:59.856 ' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:59.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.856 --rc genhtml_branch_coverage=1 00:34:59.856 --rc genhtml_function_coverage=1 00:34:59.856 --rc genhtml_legend=1 00:34:59.856 --rc geninfo_all_blocks=1 00:34:59.856 --rc geninfo_unexecuted_blocks=1 00:34:59.856 00:34:59.856 ' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:59.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:59.856 --rc genhtml_branch_coverage=1 00:34:59.856 --rc genhtml_function_coverage=1 00:34:59.856 --rc genhtml_legend=1 00:34:59.856 --rc geninfo_all_blocks=1 
00:34:59.856 --rc geninfo_unexecuted_blocks=1 00:34:59.856 00:34:59.856 ' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:59.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:59.856 22:23:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:07.995 22:23:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:07.995 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:07.995 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:07.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:07.995 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:07.995 22:23:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # is_hw=yes 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:07.995 22:23:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:07.995 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:07.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:07.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:35:07.996 00:35:07.996 --- 10.0.0.2 ping statistics --- 00:35:07.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.996 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:07.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:07.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:35:07.996 00:35:07.996 --- 10.0.0.1 ping statistics --- 00:35:07.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:07.996 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # return 0 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:07.996 
22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:07.996 22:23:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:11.294 Waiting for block devices as requested 00:35:11.294 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:11.294 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:11.294 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:11.294 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:11.294 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:11.294 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:11.294 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:11.294 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:11.554 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:11.554 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:11.815 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:11.815 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:11.815 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:12.075 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:12.075 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:35:12.075 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:12.335 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:12.597 No valid GPT data, bailing 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:12.597 22:23:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:35:12.597 00:35:12.597 Discovery Log Number of Records 2, Generation counter 2 00:35:12.597 =====Discovery Log Entry 0====== 00:35:12.597 trtype: tcp 00:35:12.597 adrfam: ipv4 00:35:12.597 subtype: current discovery subsystem 
00:35:12.597 treq: not specified, sq flow control disable supported 00:35:12.597 portid: 1 00:35:12.597 trsvcid: 4420 00:35:12.597 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:12.597 traddr: 10.0.0.1 00:35:12.597 eflags: none 00:35:12.597 sectype: none 00:35:12.597 =====Discovery Log Entry 1====== 00:35:12.597 trtype: tcp 00:35:12.597 adrfam: ipv4 00:35:12.597 subtype: nvme subsystem 00:35:12.597 treq: not specified, sq flow control disable supported 00:35:12.597 portid: 1 00:35:12.597 trsvcid: 4420 00:35:12.597 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:12.597 traddr: 10.0.0.1 00:35:12.597 eflags: none 00:35:12.597 sectype: none 00:35:12.597 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:12.597 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:12.859 ===================================================== 00:35:12.859 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:12.859 ===================================================== 00:35:12.859 Controller Capabilities/Features 00:35:12.859 ================================ 00:35:12.859 Vendor ID: 0000 00:35:12.859 Subsystem Vendor ID: 0000 00:35:12.859 Serial Number: e7be63c929235db7bda2 00:35:12.859 Model Number: Linux 00:35:12.859 Firmware Version: 6.8.9-20 00:35:12.859 Recommended Arb Burst: 0 00:35:12.859 IEEE OUI Identifier: 00 00 00 00:35:12.859 Multi-path I/O 00:35:12.859 May have multiple subsystem ports: No 00:35:12.859 May have multiple controllers: No 00:35:12.859 Associated with SR-IOV VF: No 00:35:12.859 Max Data Transfer Size: Unlimited 00:35:12.859 Max Number of Namespaces: 0 00:35:12.859 Max Number of I/O Queues: 1024 00:35:12.859 NVMe Specification Version (VS): 1.3 00:35:12.859 NVMe Specification Version (Identify): 1.3 00:35:12.859 Maximum Queue Entries: 1024 
00:35:12.859 Contiguous Queues Required: No 00:35:12.859 Arbitration Mechanisms Supported 00:35:12.859 Weighted Round Robin: Not Supported 00:35:12.859 Vendor Specific: Not Supported 00:35:12.859 Reset Timeout: 7500 ms 00:35:12.859 Doorbell Stride: 4 bytes 00:35:12.859 NVM Subsystem Reset: Not Supported 00:35:12.859 Command Sets Supported 00:35:12.859 NVM Command Set: Supported 00:35:12.859 Boot Partition: Not Supported 00:35:12.859 Memory Page Size Minimum: 4096 bytes 00:35:12.859 Memory Page Size Maximum: 4096 bytes 00:35:12.859 Persistent Memory Region: Not Supported 00:35:12.859 Optional Asynchronous Events Supported 00:35:12.859 Namespace Attribute Notices: Not Supported 00:35:12.859 Firmware Activation Notices: Not Supported 00:35:12.859 ANA Change Notices: Not Supported 00:35:12.859 PLE Aggregate Log Change Notices: Not Supported 00:35:12.859 LBA Status Info Alert Notices: Not Supported 00:35:12.859 EGE Aggregate Log Change Notices: Not Supported 00:35:12.859 Normal NVM Subsystem Shutdown event: Not Supported 00:35:12.859 Zone Descriptor Change Notices: Not Supported 00:35:12.859 Discovery Log Change Notices: Supported 00:35:12.859 Controller Attributes 00:35:12.859 128-bit Host Identifier: Not Supported 00:35:12.859 Non-Operational Permissive Mode: Not Supported 00:35:12.859 NVM Sets: Not Supported 00:35:12.859 Read Recovery Levels: Not Supported 00:35:12.859 Endurance Groups: Not Supported 00:35:12.859 Predictable Latency Mode: Not Supported 00:35:12.859 Traffic Based Keep ALive: Not Supported 00:35:12.859 Namespace Granularity: Not Supported 00:35:12.859 SQ Associations: Not Supported 00:35:12.859 UUID List: Not Supported 00:35:12.859 Multi-Domain Subsystem: Not Supported 00:35:12.859 Fixed Capacity Management: Not Supported 00:35:12.859 Variable Capacity Management: Not Supported 00:35:12.859 Delete Endurance Group: Not Supported 00:35:12.859 Delete NVM Set: Not Supported 00:35:12.859 Extended LBA Formats Supported: Not Supported 00:35:12.859 Flexible 
Data Placement Supported: Not Supported 00:35:12.859 00:35:12.859 Controller Memory Buffer Support 00:35:12.859 ================================ 00:35:12.859 Supported: No 00:35:12.859 00:35:12.859 Persistent Memory Region Support 00:35:12.859 ================================ 00:35:12.859 Supported: No 00:35:12.859 00:35:12.859 Admin Command Set Attributes 00:35:12.859 ============================ 00:35:12.859 Security Send/Receive: Not Supported 00:35:12.859 Format NVM: Not Supported 00:35:12.859 Firmware Activate/Download: Not Supported 00:35:12.859 Namespace Management: Not Supported 00:35:12.859 Device Self-Test: Not Supported 00:35:12.859 Directives: Not Supported 00:35:12.859 NVMe-MI: Not Supported 00:35:12.859 Virtualization Management: Not Supported 00:35:12.859 Doorbell Buffer Config: Not Supported 00:35:12.859 Get LBA Status Capability: Not Supported 00:35:12.859 Command & Feature Lockdown Capability: Not Supported 00:35:12.859 Abort Command Limit: 1 00:35:12.859 Async Event Request Limit: 1 00:35:12.859 Number of Firmware Slots: N/A 00:35:12.859 Firmware Slot 1 Read-Only: N/A 00:35:12.859 Firmware Activation Without Reset: N/A 00:35:12.859 Multiple Update Detection Support: N/A 00:35:12.859 Firmware Update Granularity: No Information Provided 00:35:12.859 Per-Namespace SMART Log: No 00:35:12.859 Asymmetric Namespace Access Log Page: Not Supported 00:35:12.859 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:12.859 Command Effects Log Page: Not Supported 00:35:12.859 Get Log Page Extended Data: Supported 00:35:12.859 Telemetry Log Pages: Not Supported 00:35:12.859 Persistent Event Log Pages: Not Supported 00:35:12.859 Supported Log Pages Log Page: May Support 00:35:12.859 Commands Supported & Effects Log Page: Not Supported 00:35:12.859 Feature Identifiers & Effects Log Page:May Support 00:35:12.859 NVMe-MI Commands & Effects Log Page: May Support 00:35:12.859 Data Area 4 for Telemetry Log: Not Supported 00:35:12.859 Error Log Page Entries 
Supported: 1 00:35:12.859 Keep Alive: Not Supported 00:35:12.859 00:35:12.859 NVM Command Set Attributes 00:35:12.859 ========================== 00:35:12.859 Submission Queue Entry Size 00:35:12.859 Max: 1 00:35:12.859 Min: 1 00:35:12.859 Completion Queue Entry Size 00:35:12.859 Max: 1 00:35:12.859 Min: 1 00:35:12.859 Number of Namespaces: 0 00:35:12.859 Compare Command: Not Supported 00:35:12.859 Write Uncorrectable Command: Not Supported 00:35:12.859 Dataset Management Command: Not Supported 00:35:12.859 Write Zeroes Command: Not Supported 00:35:12.859 Set Features Save Field: Not Supported 00:35:12.859 Reservations: Not Supported 00:35:12.859 Timestamp: Not Supported 00:35:12.859 Copy: Not Supported 00:35:12.859 Volatile Write Cache: Not Present 00:35:12.859 Atomic Write Unit (Normal): 1 00:35:12.859 Atomic Write Unit (PFail): 1 00:35:12.859 Atomic Compare & Write Unit: 1 00:35:12.859 Fused Compare & Write: Not Supported 00:35:12.859 Scatter-Gather List 00:35:12.859 SGL Command Set: Supported 00:35:12.859 SGL Keyed: Not Supported 00:35:12.860 SGL Bit Bucket Descriptor: Not Supported 00:35:12.860 SGL Metadata Pointer: Not Supported 00:35:12.860 Oversized SGL: Not Supported 00:35:12.860 SGL Metadata Address: Not Supported 00:35:12.860 SGL Offset: Supported 00:35:12.860 Transport SGL Data Block: Not Supported 00:35:12.860 Replay Protected Memory Block: Not Supported 00:35:12.860 00:35:12.860 Firmware Slot Information 00:35:12.860 ========================= 00:35:12.860 Active slot: 0 00:35:12.860 00:35:12.860 00:35:12.860 Error Log 00:35:12.860 ========= 00:35:12.860 00:35:12.860 Active Namespaces 00:35:12.860 ================= 00:35:12.860 Discovery Log Page 00:35:12.860 ================== 00:35:12.860 Generation Counter: 2 00:35:12.860 Number of Records: 2 00:35:12.860 Record Format: 0 00:35:12.860 00:35:12.860 Discovery Log Entry 0 00:35:12.860 ---------------------- 00:35:12.860 Transport Type: 3 (TCP) 00:35:12.860 Address Family: 1 (IPv4) 00:35:12.860 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:35:12.860 Entry Flags: 00:35:12.860 Duplicate Returned Information: 0 00:35:12.860 Explicit Persistent Connection Support for Discovery: 0 00:35:12.860 Transport Requirements: 00:35:12.860 Secure Channel: Not Specified 00:35:12.860 Port ID: 1 (0x0001) 00:35:12.860 Controller ID: 65535 (0xffff) 00:35:12.860 Admin Max SQ Size: 32 00:35:12.860 Transport Service Identifier: 4420 00:35:12.860 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:12.860 Transport Address: 10.0.0.1 00:35:12.860 Discovery Log Entry 1 00:35:12.860 ---------------------- 00:35:12.860 Transport Type: 3 (TCP) 00:35:12.860 Address Family: 1 (IPv4) 00:35:12.860 Subsystem Type: 2 (NVM Subsystem) 00:35:12.860 Entry Flags: 00:35:12.860 Duplicate Returned Information: 0 00:35:12.860 Explicit Persistent Connection Support for Discovery: 0 00:35:12.860 Transport Requirements: 00:35:12.860 Secure Channel: Not Specified 00:35:12.860 Port ID: 1 (0x0001) 00:35:12.860 Controller ID: 65535 (0xffff) 00:35:12.860 Admin Max SQ Size: 32 00:35:12.860 Transport Service Identifier: 4420 00:35:12.860 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:12.860 Transport Address: 10.0.0.1 00:35:12.860 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.860 get_feature(0x01) failed 00:35:12.860 get_feature(0x02) failed 00:35:12.860 get_feature(0x04) failed 00:35:12.860 ===================================================== 00:35:12.860 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:12.860 ===================================================== 00:35:12.860 Controller Capabilities/Features 00:35:12.860 ================================ 00:35:12.860 Vendor ID: 0000 00:35:12.860 Subsystem Vendor ID: 
0000 00:35:12.860 Serial Number: 06cdd055df4808e63b32 00:35:12.860 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:12.860 Firmware Version: 6.8.9-20 00:35:12.860 Recommended Arb Burst: 6 00:35:12.860 IEEE OUI Identifier: 00 00 00 00:35:12.860 Multi-path I/O 00:35:12.860 May have multiple subsystem ports: Yes 00:35:12.860 May have multiple controllers: Yes 00:35:12.860 Associated with SR-IOV VF: No 00:35:12.860 Max Data Transfer Size: Unlimited 00:35:12.860 Max Number of Namespaces: 1024 00:35:12.860 Max Number of I/O Queues: 128 00:35:12.860 NVMe Specification Version (VS): 1.3 00:35:12.860 NVMe Specification Version (Identify): 1.3 00:35:12.860 Maximum Queue Entries: 1024 00:35:12.860 Contiguous Queues Required: No 00:35:12.860 Arbitration Mechanisms Supported 00:35:12.860 Weighted Round Robin: Not Supported 00:35:12.860 Vendor Specific: Not Supported 00:35:12.860 Reset Timeout: 7500 ms 00:35:12.860 Doorbell Stride: 4 bytes 00:35:12.860 NVM Subsystem Reset: Not Supported 00:35:12.860 Command Sets Supported 00:35:12.860 NVM Command Set: Supported 00:35:12.860 Boot Partition: Not Supported 00:35:12.860 Memory Page Size Minimum: 4096 bytes 00:35:12.860 Memory Page Size Maximum: 4096 bytes 00:35:12.860 Persistent Memory Region: Not Supported 00:35:12.860 Optional Asynchronous Events Supported 00:35:12.860 Namespace Attribute Notices: Supported 00:35:12.860 Firmware Activation Notices: Not Supported 00:35:12.860 ANA Change Notices: Supported 00:35:12.860 PLE Aggregate Log Change Notices: Not Supported 00:35:12.860 LBA Status Info Alert Notices: Not Supported 00:35:12.860 EGE Aggregate Log Change Notices: Not Supported 00:35:12.860 Normal NVM Subsystem Shutdown event: Not Supported 00:35:12.860 Zone Descriptor Change Notices: Not Supported 00:35:12.860 Discovery Log Change Notices: Not Supported 00:35:12.860 Controller Attributes 00:35:12.860 128-bit Host Identifier: Supported 00:35:12.860 Non-Operational Permissive Mode: Not Supported 00:35:12.860 NVM Sets: Not 
Supported 00:35:12.860 Read Recovery Levels: Not Supported 00:35:12.860 Endurance Groups: Not Supported 00:35:12.860 Predictable Latency Mode: Not Supported 00:35:12.860 Traffic Based Keep ALive: Supported 00:35:12.860 Namespace Granularity: Not Supported 00:35:12.860 SQ Associations: Not Supported 00:35:12.860 UUID List: Not Supported 00:35:12.860 Multi-Domain Subsystem: Not Supported 00:35:12.860 Fixed Capacity Management: Not Supported 00:35:12.860 Variable Capacity Management: Not Supported 00:35:12.860 Delete Endurance Group: Not Supported 00:35:12.860 Delete NVM Set: Not Supported 00:35:12.860 Extended LBA Formats Supported: Not Supported 00:35:12.860 Flexible Data Placement Supported: Not Supported 00:35:12.860 00:35:12.860 Controller Memory Buffer Support 00:35:12.860 ================================ 00:35:12.860 Supported: No 00:35:12.860 00:35:12.860 Persistent Memory Region Support 00:35:12.860 ================================ 00:35:12.860 Supported: No 00:35:12.860 00:35:12.860 Admin Command Set Attributes 00:35:12.860 ============================ 00:35:12.860 Security Send/Receive: Not Supported 00:35:12.860 Format NVM: Not Supported 00:35:12.860 Firmware Activate/Download: Not Supported 00:35:12.860 Namespace Management: Not Supported 00:35:12.860 Device Self-Test: Not Supported 00:35:12.860 Directives: Not Supported 00:35:12.860 NVMe-MI: Not Supported 00:35:12.860 Virtualization Management: Not Supported 00:35:12.860 Doorbell Buffer Config: Not Supported 00:35:12.860 Get LBA Status Capability: Not Supported 00:35:12.860 Command & Feature Lockdown Capability: Not Supported 00:35:12.860 Abort Command Limit: 4 00:35:12.860 Async Event Request Limit: 4 00:35:12.860 Number of Firmware Slots: N/A 00:35:12.860 Firmware Slot 1 Read-Only: N/A 00:35:12.860 Firmware Activation Without Reset: N/A 00:35:12.860 Multiple Update Detection Support: N/A 00:35:12.860 Firmware Update Granularity: No Information Provided 00:35:12.860 Per-Namespace SMART Log: Yes 
00:35:12.860 Asymmetric Namespace Access Log Page: Supported 00:35:12.860 ANA Transition Time : 10 sec 00:35:12.860 00:35:12.860 Asymmetric Namespace Access Capabilities 00:35:12.860 ANA Optimized State : Supported 00:35:12.860 ANA Non-Optimized State : Supported 00:35:12.860 ANA Inaccessible State : Supported 00:35:12.860 ANA Persistent Loss State : Supported 00:35:12.860 ANA Change State : Supported 00:35:12.860 ANAGRPID is not changed : No 00:35:12.860 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:12.860 00:35:12.860 ANA Group Identifier Maximum : 128 00:35:12.860 Number of ANA Group Identifiers : 128 00:35:12.860 Max Number of Allowed Namespaces : 1024 00:35:12.860 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:12.860 Command Effects Log Page: Supported 00:35:12.860 Get Log Page Extended Data: Supported 00:35:12.860 Telemetry Log Pages: Not Supported 00:35:12.860 Persistent Event Log Pages: Not Supported 00:35:12.860 Supported Log Pages Log Page: May Support 00:35:12.860 Commands Supported & Effects Log Page: Not Supported 00:35:12.860 Feature Identifiers & Effects Log Page:May Support 00:35:12.860 NVMe-MI Commands & Effects Log Page: May Support 00:35:12.860 Data Area 4 for Telemetry Log: Not Supported 00:35:12.860 Error Log Page Entries Supported: 128 00:35:12.860 Keep Alive: Supported 00:35:12.860 Keep Alive Granularity: 1000 ms 00:35:12.860 00:35:12.860 NVM Command Set Attributes 00:35:12.860 ========================== 00:35:12.860 Submission Queue Entry Size 00:35:12.860 Max: 64 00:35:12.860 Min: 64 00:35:12.860 Completion Queue Entry Size 00:35:12.860 Max: 16 00:35:12.860 Min: 16 00:35:12.860 Number of Namespaces: 1024 00:35:12.860 Compare Command: Not Supported 00:35:12.860 Write Uncorrectable Command: Not Supported 00:35:12.860 Dataset Management Command: Supported 00:35:12.860 Write Zeroes Command: Supported 00:35:12.860 Set Features Save Field: Not Supported 00:35:12.860 Reservations: Not Supported 00:35:12.860 Timestamp: Not Supported 
00:35:12.860 Copy: Not Supported 00:35:12.861 Volatile Write Cache: Present 00:35:12.861 Atomic Write Unit (Normal): 1 00:35:12.861 Atomic Write Unit (PFail): 1 00:35:12.861 Atomic Compare & Write Unit: 1 00:35:12.861 Fused Compare & Write: Not Supported 00:35:12.861 Scatter-Gather List 00:35:12.861 SGL Command Set: Supported 00:35:12.861 SGL Keyed: Not Supported 00:35:12.861 SGL Bit Bucket Descriptor: Not Supported 00:35:12.861 SGL Metadata Pointer: Not Supported 00:35:12.861 Oversized SGL: Not Supported 00:35:12.861 SGL Metadata Address: Not Supported 00:35:12.861 SGL Offset: Supported 00:35:12.861 Transport SGL Data Block: Not Supported 00:35:12.861 Replay Protected Memory Block: Not Supported 00:35:12.861 00:35:12.861 Firmware Slot Information 00:35:12.861 ========================= 00:35:12.861 Active slot: 0 00:35:12.861 00:35:12.861 Asymmetric Namespace Access 00:35:12.861 =========================== 00:35:12.861 Change Count : 0 00:35:12.861 Number of ANA Group Descriptors : 1 00:35:12.861 ANA Group Descriptor : 0 00:35:12.861 ANA Group ID : 1 00:35:12.861 Number of NSID Values : 1 00:35:12.861 Change Count : 0 00:35:12.861 ANA State : 1 00:35:12.861 Namespace Identifier : 1 00:35:12.861 00:35:12.861 Commands Supported and Effects 00:35:12.861 ============================== 00:35:12.861 Admin Commands 00:35:12.861 -------------- 00:35:12.861 Get Log Page (02h): Supported 00:35:12.861 Identify (06h): Supported 00:35:12.861 Abort (08h): Supported 00:35:12.861 Set Features (09h): Supported 00:35:12.861 Get Features (0Ah): Supported 00:35:12.861 Asynchronous Event Request (0Ch): Supported 00:35:12.861 Keep Alive (18h): Supported 00:35:12.861 I/O Commands 00:35:12.861 ------------ 00:35:12.861 Flush (00h): Supported 00:35:12.861 Write (01h): Supported LBA-Change 00:35:12.861 Read (02h): Supported 00:35:12.861 Write Zeroes (08h): Supported LBA-Change 00:35:12.861 Dataset Management (09h): Supported 00:35:12.861 00:35:12.861 Error Log 00:35:12.861 ========= 
00:35:12.861 Entry: 0 00:35:12.861 Error Count: 0x3 00:35:12.861 Submission Queue Id: 0x0 00:35:12.861 Command Id: 0x5 00:35:12.861 Phase Bit: 0 00:35:12.861 Status Code: 0x2 00:35:12.861 Status Code Type: 0x0 00:35:12.861 Do Not Retry: 1 00:35:12.861 Error Location: 0x28 00:35:12.861 LBA: 0x0 00:35:12.861 Namespace: 0x0 00:35:12.861 Vendor Log Page: 0x0 00:35:12.861 ----------- 00:35:12.861 Entry: 1 00:35:12.861 Error Count: 0x2 00:35:12.861 Submission Queue Id: 0x0 00:35:12.861 Command Id: 0x5 00:35:12.861 Phase Bit: 0 00:35:12.861 Status Code: 0x2 00:35:12.861 Status Code Type: 0x0 00:35:12.861 Do Not Retry: 1 00:35:12.861 Error Location: 0x28 00:35:12.861 LBA: 0x0 00:35:12.861 Namespace: 0x0 00:35:12.861 Vendor Log Page: 0x0 00:35:12.861 ----------- 00:35:12.861 Entry: 2 00:35:12.861 Error Count: 0x1 00:35:12.861 Submission Queue Id: 0x0 00:35:12.861 Command Id: 0x4 00:35:12.861 Phase Bit: 0 00:35:12.861 Status Code: 0x2 00:35:12.861 Status Code Type: 0x0 00:35:12.861 Do Not Retry: 1 00:35:12.861 Error Location: 0x28 00:35:12.861 LBA: 0x0 00:35:12.861 Namespace: 0x0 00:35:12.861 Vendor Log Page: 0x0 00:35:12.861 00:35:12.861 Number of Queues 00:35:12.861 ================ 00:35:12.861 Number of I/O Submission Queues: 128 00:35:12.861 Number of I/O Completion Queues: 128 00:35:12.861 00:35:12.861 ZNS Specific Controller Data 00:35:12.861 ============================ 00:35:12.861 Zone Append Size Limit: 0 00:35:12.861 00:35:12.861 00:35:12.861 Active Namespaces 00:35:12.861 ================= 00:35:12.861 get_feature(0x05) failed 00:35:12.861 Namespace ID:1 00:35:12.861 Command Set Identifier: NVM (00h) 00:35:12.861 Deallocate: Supported 00:35:12.861 Deallocated/Unwritten Error: Not Supported 00:35:12.861 Deallocated Read Value: Unknown 00:35:12.861 Deallocate in Write Zeroes: Not Supported 00:35:12.861 Deallocated Guard Field: 0xFFFF 00:35:12.861 Flush: Supported 00:35:12.861 Reservation: Not Supported 00:35:12.861 Namespace Sharing Capabilities: Multiple 
Controllers 00:35:12.861 Size (in LBAs): 3750748848 (1788GiB) 00:35:12.861 Capacity (in LBAs): 3750748848 (1788GiB) 00:35:12.861 Utilization (in LBAs): 3750748848 (1788GiB) 00:35:12.861 UUID: 73a80da1-381a-47b4-862e-b872e1e806e7 00:35:12.861 Thin Provisioning: Not Supported 00:35:12.861 Per-NS Atomic Units: Yes 00:35:12.861 Atomic Write Unit (Normal): 8 00:35:12.861 Atomic Write Unit (PFail): 8 00:35:12.861 Preferred Write Granularity: 8 00:35:12.861 Atomic Compare & Write Unit: 8 00:35:12.861 Atomic Boundary Size (Normal): 0 00:35:12.861 Atomic Boundary Size (PFail): 0 00:35:12.861 Atomic Boundary Offset: 0 00:35:12.861 NGUID/EUI64 Never Reused: No 00:35:12.861 ANA group ID: 1 00:35:12.861 Namespace Write Protected: No 00:35:12.861 Number of LBA Formats: 1 00:35:12.861 Current LBA Format: LBA Format #00 00:35:12.861 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:12.861 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:12.861 rmmod nvme_tcp 00:35:12.861 rmmod nvme_fabrics 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:12.861 22:23:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:12.861 22:23:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:35:15.405 22:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:18.702 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:35:18.702 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:18.702 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:19.274 00:35:19.274 real 0m19.645s 00:35:19.274 user 0m5.293s 00:35:19.274 sys 0m11.365s 00:35:19.274 22:23:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:19.274 22:23:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:19.274 ************************************ 00:35:19.274 END TEST nvmf_identify_kernel_target 00:35:19.274 ************************************ 00:35:19.274 22:23:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:19.274 22:23:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:19.274 22:23:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:19.274 22:23:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.274 ************************************ 00:35:19.274 START TEST nvmf_auth_host 00:35:19.274 ************************************ 00:35:19.274 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:19.274 * Looking for test storage... 
00:35:19.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:19.274 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:19.274 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:35:19.274 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.536 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:35:19.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.537 --rc genhtml_branch_coverage=1 00:35:19.537 --rc genhtml_function_coverage=1 00:35:19.537 --rc genhtml_legend=1 00:35:19.537 --rc geninfo_all_blocks=1 00:35:19.537 --rc geninfo_unexecuted_blocks=1 00:35:19.537 00:35:19.537 ' 00:35:19.537 22:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:19.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.537 --rc genhtml_branch_coverage=1 00:35:19.537 --rc genhtml_function_coverage=1 00:35:19.537 --rc genhtml_legend=1 00:35:19.537 --rc geninfo_all_blocks=1 00:35:19.537 --rc geninfo_unexecuted_blocks=1 00:35:19.537 00:35:19.537 ' 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:19.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.537 --rc genhtml_branch_coverage=1 00:35:19.537 --rc genhtml_function_coverage=1 00:35:19.537 --rc genhtml_legend=1 00:35:19.537 --rc geninfo_all_blocks=1 00:35:19.537 --rc geninfo_unexecuted_blocks=1 00:35:19.537 00:35:19.537 ' 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:19.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.537 --rc genhtml_branch_coverage=1 00:35:19.537 --rc genhtml_function_coverage=1 00:35:19.537 --rc genhtml_legend=1 00:35:19.537 --rc geninfo_all_blocks=1 00:35:19.537 --rc geninfo_unexecuted_blocks=1 00:35:19.537 00:35:19.537 ' 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.537 22:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:19.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:35:19.537 22:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:19.537 22:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 
'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:27.676 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:27.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.676 
22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:27.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:35:27.676 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:27.677 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@428 -- # (( 2 == 0 )) 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # is_hw=yes 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns 
add cvl_0_0_ns_spdk 00:35:27.677 22:23:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:27.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:35:27.677 00:35:27.677 --- 10.0.0.2 ping statistics --- 00:35:27.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.677 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:27.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:35:27.677 00:35:27.677 --- 10.0.0.1 ping statistics --- 00:35:27.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.677 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # return 0 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=3742760 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 3742760 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3742760 ']' 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:27.677 22:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:27.677 22:23:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=4c2d33dc16078dc71386030bf9c2ba36 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.NFn 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 4c2d33dc16078dc71386030bf9c2ba36 0 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 4c2d33dc16078dc71386030bf9c2ba36 0 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=4c2d33dc16078dc71386030bf9c2ba36 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:35:27.677 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.NFn 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.NFn 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.NFn 
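The `gen_dhchap_key` / `format_key` sequence traced above (xxd pulls random hex from /dev/urandom, then a small embedded Python snippet wraps it) can be sketched as below. This is a reconstruction from the logged behavior, not the verbatim `nvmf/common.sh` helper: the assumption is that the ASCII hex string itself is the secret, a little-endian CRC32 of it is appended, and the result is base64-encoded and wrapped as `DHHC-1:<digest>:<b64>:`.

```shell
# Sketch (assumption: mirrors SPDK's format_key, not copied from it).
format_dhchap_key() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# secret = ASCII hex key + little-endian CRC32, base64-wrapped
data = key + zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(data).decode()}:")
PYEOF
}
```

Feeding it the 48-hex-char key generated later in this run with digest index 0 ("null") produces a `DHHC-1:00:YjgxMmVl...` secret of the shape used in the `nvmet_auth_set_key` step.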
00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=82dfd6c26bde074c6b6555e3f9651f4afba426c23d269f372589a180cfbde1ed 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.biC 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 82dfd6c26bde074c6b6555e3f9651f4afba426c23d269f372589a180cfbde1ed 3 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 82dfd6c26bde074c6b6555e3f9651f4afba426c23d269f372589a180cfbde1ed 3 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=82dfd6c26bde074c6b6555e3f9651f4afba426c23d269f372589a180cfbde1ed 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.biC 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.biC 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.biC 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=b812ee7ccdd5848120facd3d67193e540098110afd821694 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.PYu 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key b812ee7ccdd5848120facd3d67193e540098110afd821694 0 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 b812ee7ccdd5848120facd3d67193e540098110afd821694 0 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
prefix=DHHC-1 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=b812ee7ccdd5848120facd3d67193e540098110afd821694 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.PYu 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.PYu 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.PYu 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=2a1323afcbf74cf1b1d1d5c95af5e3273b7b8519f083dd64 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.MlA 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 2a1323afcbf74cf1b1d1d5c95af5e3273b7b8519f083dd64 2 00:35:27.939 22:23:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 2a1323afcbf74cf1b1d1d5c95af5e3273b7b8519f083dd64 2 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=2a1323afcbf74cf1b1d1d5c95af5e3273b7b8519f083dd64 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.MlA 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.MlA 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.MlA 00:35:27.939 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=b8dd24538ee7792351dcf5f5baa5bd6b 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 
00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.5Yq 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key b8dd24538ee7792351dcf5f5baa5bd6b 1 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 b8dd24538ee7792351dcf5f5baa5bd6b 1 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=b8dd24538ee7792351dcf5f5baa5bd6b 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:35:27.940 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.5Yq 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.5Yq 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.5Yq 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 
/dev/urandom 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=90d96c5254bf22039eb57698af30cbd6 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.ZzA 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 90d96c5254bf22039eb57698af30cbd6 1 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 90d96c5254bf22039eb57698af30cbd6 1 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=90d96c5254bf22039eb57698af30cbd6 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.ZzA 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.ZzA 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ZzA 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:28.201 22:23:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=03a2570ea292ddc618b4cf30ba0b41bbc3df40ebbf20f134 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.Sb3 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 03a2570ea292ddc618b4cf30ba0b41bbc3df40ebbf20f134 2 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 03a2570ea292ddc618b4cf30ba0b41bbc3df40ebbf20f134 2 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=03a2570ea292ddc618b4cf30ba0b41bbc3df40ebbf20f134 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.Sb3 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.Sb3 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Sb3 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
local digest len file key 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=4cd232bf64cfe62a1cf24f7e67fc7ecb 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.5qB 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 4cd232bf64cfe62a1cf24f7e67fc7ecb 0 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 4cd232bf64cfe62a1cf24f7e67fc7ecb 0 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=4cd232bf64cfe62a1cf24f7e67fc7ecb 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.5qB 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.5qB 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.5qB 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=cc21abcde232cf9b28050527f0c9d07971f41515bdcae4c1fb8e22b828c778eb 00:35:28.201 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.uBf 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key cc21abcde232cf9b28050527f0c9d07971f41515bdcae4c1fb8e22b828c778eb 3 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 cc21abcde232cf9b28050527f0c9d07971f41515bdcae4c1fb8e22b828c778eb 3 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=cc21abcde232cf9b28050527f0c9d07971f41515bdcae4c1fb8e22b828c778eb 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:35:28.462 22:23:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.uBf 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.uBf 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.uBf 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3742760 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3742760 ']' 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:28.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NFn 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.biC ]] 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.biC 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.PYu 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.MlA ]] 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MlA 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.462 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.723 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.723 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.723 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5Yq 00:35:28.723 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.723 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.723 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.723 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ZzA ]] 00:35:28.723 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZzA 00:35:28.723 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.723 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.724 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.724 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.724 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.Sb3 00:35:28.724 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.724 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.724 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.724 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5qB ]] 00:35:28.724 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5qB 00:35:28.724 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.724 22:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.uBf 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:28.724 22:23:47 
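The `rpc_cmd keyring_file_add_key` loop traced above registers each generated key file (and its controller-key counterpart) with the running `nvmf_tgt` over its RPC socket. A standalone sketch, with the caveat that the `rpc.py` path is illustrative and the `/tmp/spdk.key-*` names are this run's `mktemp` outputs and will differ on a fresh run:

```shell
# Register DHCHAP key files with a running nvmf_tgt (illustrative paths).
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"   # inside an SPDK checkout

$RPC keyring_file_add_key key0  /tmp/spdk.key-null.NFn
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.biC
$RPC keyring_file_add_key key1  /tmp/spdk.key-null.PYu
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MlA
# ...and so on for key2/ckey2, key3/ckey3, key4
```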
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:28.724 22:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:32.025 Waiting for block devices as requested 00:35:32.025 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:32.025 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:32.285 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:32.285 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:32.285 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:32.545 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:32.545 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:32.545 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:32.545 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:32.805 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:32.805 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:33.065 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:33.065 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:33.065 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:33.065 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:33.325 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:33.325 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:34.268 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:35:34.268 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:34.268 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:35:34.268 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:34.268 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:35:34.268 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:34.268 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:34.269 No valid GPT data, bailing 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 
-- # echo 10.0.0.1 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:34.269 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:35:34.530 00:35:34.530 Discovery Log Number of Records 2, Generation counter 2 00:35:34.530 =====Discovery Log Entry 0====== 00:35:34.530 trtype: tcp 00:35:34.530 adrfam: ipv4 00:35:34.530 subtype: current discovery subsystem 00:35:34.530 treq: not specified, sq flow control disable supported 00:35:34.530 portid: 1 00:35:34.530 trsvcid: 4420 00:35:34.530 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:34.530 traddr: 10.0.0.1 00:35:34.530 eflags: none 00:35:34.530 sectype: none 00:35:34.530 =====Discovery Log Entry 1====== 00:35:34.530 trtype: tcp 00:35:34.530 adrfam: ipv4 00:35:34.530 subtype: nvme subsystem 00:35:34.530 treq: not specified, sq flow control disable supported 00:35:34.530 portid: 1 00:35:34.530 trsvcid: 4420 00:35:34.530 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:34.530 traddr: 10.0.0.1 00:35:34.530 eflags: none 00:35:34.530 sectype: none 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
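The `echo`s at `nvmf/common.sh@695-698` set the port address (traddr, trtype, trsvcid, adrfam), and the `ln -s` at `@701` publishes the subsystem on that port, after which `nvme discover` returns the two discovery-log entries shown. A sketch under the same assumptions as above (port attribute names inferred from the nvmet configfs layout):

```shell
# Port 1 configuration mirroring nvmf/common.sh@695-701 in the log.
port=/sys/kernel/config/nvmet/ports/1
echo 10.0.0.1 > "$port/addr_traddr"    # @695
echo tcp      > "$port/addr_trtype"    # @696
echo 4420     > "$port/addr_trsvcid"   # @697
echo ipv4     > "$port/addr_adrfam"    # @698

# Expose the subsystem on the port; this is what makes it appear as
# Discovery Log Entry 1 in the nvme discover output above.
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 \
      "$port/subsystems/"              # @701

# Verbatim from nvmf/common.sh@704:
nvme discover -t tcp -a 10.0.0.1 -s 4420
```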
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
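`host/auth.sh@36-51` registers the host NQN, restricts the subsystem to it via `allowed_hosts`, and loads the DH-HMAC-CHAP key material (`hmac(sha256)`, `ffdhe2048`, and the `DHHC-1:...` host and controller keys). A hedged sketch; the `dhchap_key`, `dhchap_ctrl_key`, `dhchap_hash`, and `dhchap_dhgroup` attribute names are assumptions based on the kernel nvmet host configfs entries, as the trace only shows the `echo`ed values:

```shell
# Target-side authentication setup mirroring host/auth.sh@36-51.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
mkdir "$host"                                             # auth.sh@36

# Allow only this host on the subsystem (echo 0 at auth.sh@37 presumably
# clears allow_any_host before the allowed_hosts link at auth.sh@38):
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"

# Assumed attribute targets for the values echoed at auth.sh@48-51:
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==:' \
    > "$host/dhchap_key"
echo 'DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==:' \
    > "$host/dhchap_ctrl_key"
```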
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.530 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.531 nvme0n1 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
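On the initiator side, `host/auth.sh@60-61` drives SPDK over JSON-RPC: first `bdev_nvme_set_options` declares the permitted DH-HMAC-CHAP digests and DH groups, then `bdev_nvme_attach_controller` connects with the host key and bidirectional controller key. The commands below are lifted directly from the trace (`rpc_cmd` in the log is a wrapper around `scripts/rpc.py`); the check-and-detach pair matches the `@64-65` steps that follow:

```shell
# Initiator-side RPC sequence from host/auth.sh@60-65 in the log.
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The test then asserts the controller came up authenticated, and tears
# it down before the next digest/dhgroup/key combination:
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
scripts/rpc.py bdev_nvme_detach_controller nvme0
```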
xtrace_disable 00:35:34.531 22:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:34.531 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
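The test then loops the same connect/verify/detach cycle over every keyid (0 through 4 in the traces below). The secrets are in the `DHHC-1:XX:<base64>:` interchange format, where `XX` encodes the transformation hash of the key (`00` = none, `01` = SHA-256, `02` = SHA-384, `03` = SHA-512), which is why keyid 0 above pairs a `:00:` host key with a `:03:` controller key. A hedged aside, not part of this run: nvme-cli can generate keys in this format (flag names per recent nvme-cli; verify against your installed version):

```shell
# Generate a DH-HMAC-CHAP secret in DHHC-1 interchange format.
# --hmac 1 selects SHA-256 as the transformation hash (yielding DHHC-1:01:...).
nvme gen-dhchap-key --hmac 1 --key-length 32 \
    --nqn nqn.2024-02.io.spdk:host0
```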
host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 
00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.793 nvme0n1 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.793 22:23:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.793 
22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.793 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.054 nvme0n1 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:35.054 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.055 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:35:35.316 nvme0n1 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.316 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.577 nvme0n1 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:35.577 22:23:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.577 22:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.838 nvme0n1 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.838 
22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:35.838 
22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.838 22:23:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.838 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.098 nvme0n1 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.098 22:23:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.098 22:23:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.098 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.359 nvme0n1 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.359 22:23:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.359 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.620 nvme0n1 00:35:36.620 22:23:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:36.620 22:23:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.620 22:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.881 nvme0n1 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.881 22:23:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.881 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.142 nvme0n1 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.142 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:37.403 nvme0n1
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==:
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==:
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==:
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]]
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==:
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.403 22:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:37.663 nvme0n1
00:35:37.663 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.663 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:37.663 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:37.663 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.663 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:37.663 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX:
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO:
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX:
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]]
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO:
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:37.925 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.186 nvme0n1
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==:
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa:
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==:
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]]
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa:
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.186 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.446 nvme0n1
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=:
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=:
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.446 22:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.706 nvme0n1
00:35:38.706 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.706 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:38.706 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:38.706 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.706 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.706 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/:
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=:
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/:
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]]
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=:
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:38.971 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:39.267 nvme0n1
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==:
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==:
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==:
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]]
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==:
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:35:39.267 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:39.268 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:35:39.268 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:35:39.268 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:35:39.268 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:39.268 22:23:57
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.268 22:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.905 nvme0n1 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.905 22:23:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.905 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.171 nvme0n1 00:35:40.171 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.171 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.171 22:23:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.171 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.171 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.171 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.432 22:23:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:40.432 22:23:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.432 22:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.693 nvme0n1 00:35:40.693 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.693 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.693 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.693 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.693 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.693 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.954 22:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:40.954 22:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.954 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.214 nvme0n1 00:35:41.214 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.214 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.214 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.214 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.214 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.214 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.214 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.214 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.214 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.214 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:41.475 22:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.475 22:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.046 nvme0n1 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.047 22:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.047 22:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:42.047 22:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.047 22:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.618 nvme0n1 00:35:42.618 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.618 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.618 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.618 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.618 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.878 22:24:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.878 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.449 nvme0n1 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.449 22:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.388 nvme0n1 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.388 
22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.388 22:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.960 nvme0n1 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.960 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.221 nvme0n1 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.221 
22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:45.221 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.222 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.483 nvme0n1 
00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:45.483 22:24:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.483 
22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.483 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.744 nvme0n1 00:35:45.744 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.744 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.744 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.744 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.744 22:24:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.744 22:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.744 nvme0n1 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.744 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.006 22:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.006 nvme0n1 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.006 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:46.267 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:46.268 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:46.268 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.268 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.268 nvme0n1 00:35:46.268 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.268 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.268 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.268 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.268 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.268 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.268 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:46.529 
22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.529 nvme0n1 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.529 22:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 
00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:46.791 22:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.791 nvme0n1 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.791 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.791 22:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.053 nvme0n1 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.053 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:47.314 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.315 nvme0n1 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.315 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.576 22:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.576 22:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:47.576 22:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.576 22:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.838 nvme0n1 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.838 
22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.838 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.099 nvme0n1 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.099 22:24:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.099 22:24:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # local -A ip_candidates 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.099 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.360 nvme0n1 00:35:48.360 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.360 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.360 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.360 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.361 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.361 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.361 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.361 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.361 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.361 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:48.622 22:24:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.622 22:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.883 nvme0n1 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.883 22:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:48.883 22:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:48.883 
22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.883 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.144 nvme0n1 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.144 22:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.144 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.715 nvme0n1 
00:35:49.715 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.715 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.715 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.715 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.715 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.715 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.715 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.715 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.715 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.715 22:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.715 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.715 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:49.716 22:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.716 
22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.716 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.976 nvme0n1 00:35:49.976 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.976 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.976 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.976 22:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.976 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.976 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.237 22:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.237 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.497 nvme0n1 00:35:50.497 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.497 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.497 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.497 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.497 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.497 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.758 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.758 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.758 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.758 22:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:50.758 22:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.758 22:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.758 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.019 nvme0n1 00:35:51.019 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.019 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.019 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.019 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.019 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.019 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.019 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.019 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.019 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.019 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.280 22:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:51.280 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.540 nvme0n1 00:35:51.540 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.540 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.540 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.540 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.540 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.540 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.540 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.540 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.540 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.540 22:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:51.540 22:24:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.540 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.801 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.372 nvme0n1 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:52.372 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.373 22:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.944 nvme0n1 00:35:52.944 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.944 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.944 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.944 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:52.944 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.944 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.204 22:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.775 nvme0n1 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 
-- # ip=NVMF_INITIATOR_IP 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:53.775 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:53.776 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:53.776 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.776 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.720 nvme0n1 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.720 22:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:55.292 nvme0n1 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:55.292 22:24:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.292 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.553 nvme0n1 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.553 22:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.553 nvme0n1 00:35:55.553 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.553 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.553 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.553 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:55.554 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:55.814 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:55.815 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:55.815 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.815 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.815 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.815 nvme0n1 00:35:55.815 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.815 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.815 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.815 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.815 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.815 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 
-- # ip=NVMF_INITIATOR_IP 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.075 nvme0n1 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.075 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.336 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:56.336 nvme0n1 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:56.337 22:24:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.337 22:24:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.337 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.597 nvme0n1 00:35:56.598 22:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:56.598 22:24:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.598 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 
00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.859 nvme0n1 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.859 
22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.859 22:24:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.859 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.120 nvme0n1 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.120 22:24:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:57.120 22:24:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.120 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.381 nvme0n1 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:57.381 22:24:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:57.381 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:57.641 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:57.641 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.641 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.641 22:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.641 nvme0n1 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.641 
22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.641 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.902 
22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.902 nvme0n1 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.902 22:24:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.902 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # local -A ip_candidates 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.163 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.423 nvme0n1 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:35:58.423 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.424 22:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.684 nvme0n1 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.684 22:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.684 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.944 nvme0n1 00:35:58.944 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.944 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.944 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.944 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.944 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.944 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.206 22:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.206 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.467 nvme0n1 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.467 
22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.467 22:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.467 22:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.037 nvme0n1 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.037 22:24:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A 
ip_candidates 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.037 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.298 nvme0n1 00:36:00.298 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.298 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.298 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.298 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.298 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.298 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:36:00.558 
22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.558 22:24:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.558 22:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.819 nvme0n1 00:36:00.819 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.819 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.819 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.819 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.819 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.819 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.819 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.819 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.819 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.819 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.080 22:24:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.080 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.081 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.341 nvme0n1 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.341 22:24:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.341 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.342 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.342 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:01.342 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.342 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.342 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.342 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.342 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.602 22:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.863 nvme0n1 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.863 
22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGMyZDMzZGMxNjA3OGRjNzEzODYwMzBiZjljMmJhMzbKX3p/: 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: ]] 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODJkZmQ2YzI2YmRlMDc0YzZiNjU1NWUzZjk2NTFmNGFmYmE0MjZjMjNkMjY5ZjM3MjU4OWExODBjZmJkZTFlZChOEnw=: 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:01.863 22:24:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.863 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.804 nvme0n1 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.804 22:24:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:36:02.804 22:24:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.804 22:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.804 22:24:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.804 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.376 nvme0n1 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.376 22:24:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:03.376 22:24:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.376 22:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.946 nvme0n1 00:36:03.946 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.946 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.946 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.946 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.946 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.946 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.946 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.946 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.946 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.946 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNhMjU3MGVhMjkyZGRjNjE4YjRjZjMwYmEwYjQxYmJjM2RmNDBlYmJmMjBmMTM0ZNec/w==: 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: ]] 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGNkMjMyYmY2NGNmZTYyYTFjZjI0ZjdlNjdmYzdlY2LHHRXa: 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:04.207 22:24:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.207 22:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.777 nvme0n1 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MyMWFiY2RlMjMyY2Y5YjI4MDUwNTI3ZjBjOWQwNzk3MWY0MTUxNWJkY2FlNGMxZmI4ZTIyYjgyOGM3NzhlYhweaNk=: 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.778 
22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.778 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.350 nvme0n1 00:36:05.350 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.350 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.350 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.350 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.350 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.611 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.611 request: 00:36:05.611 { 00:36:05.611 "name": "nvme0", 00:36:05.611 "trtype": "tcp", 00:36:05.611 "traddr": "10.0.0.1", 00:36:05.611 "adrfam": "ipv4", 00:36:05.611 "trsvcid": "4420", 00:36:05.611 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:05.611 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:05.611 "prchk_reftag": false, 00:36:05.611 "prchk_guard": false, 00:36:05.611 "hdgst": false, 00:36:05.611 "ddgst": false, 00:36:05.611 "allow_unrecognized_csi": false, 00:36:05.611 "method": "bdev_nvme_attach_controller", 00:36:05.611 "req_id": 1 00:36:05.611 } 00:36:05.611 Got JSON-RPC error 
response 00:36:05.611 response: 00:36:05.611 { 00:36:05.611 "code": -5, 00:36:05.612 "message": "Input/output error" 00:36:05.612 } 00:36:05.612 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:05.612 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:05.612 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:05.612 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:05.612 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:05.612 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.612 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:05.612 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.612 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.612 22:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 
-- # [[ -z tcp ]] 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.612 request: 
00:36:05.612 { 00:36:05.612 "name": "nvme0", 00:36:05.612 "trtype": "tcp", 00:36:05.612 "traddr": "10.0.0.1", 00:36:05.612 "adrfam": "ipv4", 00:36:05.612 "trsvcid": "4420", 00:36:05.612 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:05.612 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:05.612 "prchk_reftag": false, 00:36:05.612 "prchk_guard": false, 00:36:05.612 "hdgst": false, 00:36:05.612 "ddgst": false, 00:36:05.612 "dhchap_key": "key2", 00:36:05.612 "allow_unrecognized_csi": false, 00:36:05.612 "method": "bdev_nvme_attach_controller", 00:36:05.612 "req_id": 1 00:36:05.612 } 00:36:05.612 Got JSON-RPC error response 00:36:05.612 response: 00:36:05.612 { 00:36:05.612 "code": -5, 00:36:05.612 "message": "Input/output error" 00:36:05.612 } 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:05.612 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.873 22:24:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.873 request: 00:36:05.873 { 00:36:05.873 "name": "nvme0", 00:36:05.873 "trtype": "tcp", 00:36:05.873 "traddr": "10.0.0.1", 00:36:05.873 "adrfam": "ipv4", 00:36:05.873 "trsvcid": "4420", 00:36:05.873 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:05.873 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:05.873 "prchk_reftag": false, 00:36:05.873 "prchk_guard": false, 00:36:05.873 "hdgst": false, 00:36:05.873 "ddgst": false, 00:36:05.873 "dhchap_key": "key1", 00:36:05.873 "dhchap_ctrlr_key": "ckey2", 00:36:05.873 "allow_unrecognized_csi": false, 00:36:05.873 "method": "bdev_nvme_attach_controller", 00:36:05.873 "req_id": 1 00:36:05.873 } 00:36:05.873 Got JSON-RPC error response 00:36:05.873 response: 00:36:05.873 { 00:36:05.873 "code": -5, 00:36:05.873 "message": "Input/output error" 00:36:05.873 } 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.873 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.134 nvme0n1 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:06.134 22:24:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:06.134 
22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.134 request: 00:36:06.134 { 00:36:06.134 "name": "nvme0", 00:36:06.134 "dhchap_key": "key1", 00:36:06.134 "dhchap_ctrlr_key": "ckey2", 00:36:06.134 "method": "bdev_nvme_set_keys", 00:36:06.134 "req_id": 1 00:36:06.134 } 00:36:06.134 Got JSON-RPC error response 00:36:06.134 response: 
00:36:06.134 { 00:36:06.134 "code": -13, 00:36:06.134 "message": "Permission denied" 00:36:06.134 } 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:06.134 22:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:07.518 22:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.518 22:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:07.518 22:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.518 22:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.518 22:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.518 22:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:07.518 22:24:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxMmVlN2NjZGQ1ODQ4MTIwZmFjZDNkNjcxOTNlNTQwMDk4MTEwYWZkODIxNjk01yAPvw==: 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: ]] 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmExMzIzYWZjYmY3NGNmMWIxZDFkNWM5NWFmNWUzMjczYjdiODUxOWYwODNkZDY0/7NpfA==: 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.460 nvme0n1 00:36:08.460 22:24:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjhkZDI0NTM4ZWU3NzkyMzUxZGNmNWY1YmFhNWJkNmJ3GhMX: 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: ]] 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTBkOTZjNTI1NGJmMjIwMzllYjU3Njk4YWYzMGNiZDYDP/FO: 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:08.460 22:24:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.460 request: 00:36:08.460 { 00:36:08.460 "name": "nvme0", 00:36:08.460 "dhchap_key": "key2", 00:36:08.460 "dhchap_ctrlr_key": "ckey1", 00:36:08.460 "method": "bdev_nvme_set_keys", 00:36:08.460 "req_id": 1 00:36:08.460 } 00:36:08.460 Got JSON-RPC error response 00:36:08.460 response: 00:36:08.460 { 00:36:08.460 "code": -13, 00:36:08.460 "message": "Permission denied" 00:36:08.460 } 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:08.460 22:24:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.460 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.721 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.721 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:08.721 22:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:09.662 22:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.662 22:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:09.662 22:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.662 22:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.662 22:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:09.662 rmmod nvme_tcp 
00:36:09.662 rmmod nvme_fabrics 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 3742760 ']' 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 3742760 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3742760 ']' 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3742760 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3742760 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:09.662 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3742760' 00:36:09.922 killing process with pid 3742760 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3742760 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3742760 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:09.922 22:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.834 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:12.094 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:12.095 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:12.095 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:12.095 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:12.095 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:36:12.095 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:12.095 22:24:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:12.095 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:12.095 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:12.095 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:36:12.095 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:36:12.095 22:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:15.398 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:15.398 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:15.398 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:15.398 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:15.398 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:15.398 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:15.398 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:15.658 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:15.658 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:15.658 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:15.658 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:15.658 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:15.658 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:15.658 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:15.658 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:15.658 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:15.658 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:15.918 22:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.NFn /tmp/spdk.key-null.PYu /tmp/spdk.key-sha256.5Yq /tmp/spdk.key-sha384.Sb3 
/tmp/spdk.key-sha512.uBf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:16.179 22:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:19.480 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:19.480 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:19.480 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:20.051 00:36:20.051 real 1m0.620s 00:36:20.051 user 0m54.435s 00:36:20.051 sys 0m16.005s 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.051 ************************************ 00:36:20.051 END TEST nvmf_auth_host 00:36:20.051 ************************************ 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.051 ************************************ 00:36:20.051 START TEST nvmf_digest 00:36:20.051 ************************************ 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:20.051 * Looking for test storage... 00:36:20.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:20.051 22:24:38 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:20.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.051 --rc genhtml_branch_coverage=1 00:36:20.051 --rc genhtml_function_coverage=1 00:36:20.051 --rc genhtml_legend=1 00:36:20.051 --rc geninfo_all_blocks=1 00:36:20.051 --rc geninfo_unexecuted_blocks=1 00:36:20.051 00:36:20.051 ' 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:20.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.051 --rc genhtml_branch_coverage=1 00:36:20.051 --rc genhtml_function_coverage=1 00:36:20.051 --rc genhtml_legend=1 00:36:20.051 --rc geninfo_all_blocks=1 00:36:20.051 --rc geninfo_unexecuted_blocks=1 00:36:20.051 00:36:20.051 ' 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:20.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.051 --rc genhtml_branch_coverage=1 00:36:20.051 --rc genhtml_function_coverage=1 00:36:20.051 --rc genhtml_legend=1 00:36:20.051 --rc geninfo_all_blocks=1 00:36:20.051 --rc geninfo_unexecuted_blocks=1 00:36:20.051 00:36:20.051 ' 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:20.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.051 --rc genhtml_branch_coverage=1 00:36:20.051 --rc genhtml_function_coverage=1 00:36:20.051 --rc genhtml_legend=1 00:36:20.051 --rc geninfo_all_blocks=1 00:36:20.051 --rc geninfo_unexecuted_blocks=1 00:36:20.051 00:36:20.051 ' 00:36:20.051 22:24:38 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:20.051 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:20.318 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:20.319 
22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:20.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:36:20.319 22:24:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:20.319 22:24:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:28.565 22:24:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:28.565 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:28.565 22:24:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:28.565 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # 
for net_dev in "${!pci_net_devs[@]}" 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:28.565 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ up == up ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:28.565 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # is_hw=yes 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ yes == yes ]] 
00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:28.565 22:24:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:28.565 22:24:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:28.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:28.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:36:28.565 00:36:28.565 --- 10.0.0.2 ping statistics --- 00:36:28.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.565 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:28.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:28.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:36:28.565 00:36:28.565 --- 10.0.0.1 ping statistics --- 00:36:28.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.565 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # return 0 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.565 ************************************ 00:36:28.565 START TEST nvmf_digest_clean 00:36:28.565 ************************************ 00:36:28.565 
22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=3760332 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 3760332 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3760332 ']' 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:28.565 22:24:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:28.565 [2024-10-12 22:24:46.239674] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:28.565 [2024-10-12 22:24:46.239737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.565 [2024-10-12 22:24:46.307240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.565 [2024-10-12 22:24:46.349964] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.565 [2024-10-12 22:24:46.350013] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:28.565 [2024-10-12 22:24:46.350019] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.565 [2024-10-12 22:24:46.350024] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:28.565 [2024-10-12 22:24:46.350029] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:28.565 [2024-10-12 22:24:46.350054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:28.565 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:28.566 null0 00:36:28.566 [2024-10-12 22:24:46.549833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:28.566 [2024-10-12 22:24:46.574151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3760355 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3760355 /var/tmp/bperf.sock 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3760355 ']' 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:28.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:28.566 22:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:28.566 [2024-10-12 22:24:46.635905] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:28.566 [2024-10-12 22:24:46.635967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3760355 ] 00:36:28.566 [2024-10-12 22:24:46.717200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.566 [2024-10-12 22:24:46.763615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.137 22:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:29.137 22:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:29.137 22:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:29.137 22:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:29.137 22:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:29.398 22:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:29.398 22:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:29.658 nvme0n1 00:36:29.658 22:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:29.658 22:24:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:29.919 Running I/O for 2 seconds... 00:36:31.804 19668.00 IOPS, 76.83 MiB/s [2024-10-12T20:24:50.293Z] 19651.50 IOPS, 76.76 MiB/s 00:36:31.804 Latency(us) 00:36:31.804 [2024-10-12T20:24:50.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:31.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:31.804 nvme0n1 : 2.01 19647.49 76.75 0.00 0.00 6506.17 3495.25 17803.95 00:36:31.804 [2024-10-12T20:24:50.293Z] =================================================================================================================== 00:36:31.804 [2024-10-12T20:24:50.293Z] Total : 19647.49 76.75 0.00 0.00 6506.17 3495.25 17803.95 00:36:31.804 { 00:36:31.804 "results": [ 00:36:31.804 { 00:36:31.804 "job": "nvme0n1", 00:36:31.804 "core_mask": "0x2", 00:36:31.804 "workload": "randread", 00:36:31.804 "status": "finished", 00:36:31.804 "queue_depth": 128, 00:36:31.804 "io_size": 4096, 00:36:31.804 "runtime": 2.005396, 00:36:31.804 "iops": 19647.49106909558, 00:36:31.804 "mibps": 76.74801198865461, 00:36:31.804 "io_failed": 0, 00:36:31.804 "io_timeout": 0, 00:36:31.804 "avg_latency_us": 6506.174583047808, 00:36:31.804 "min_latency_us": 3495.2533333333336, 00:36:31.804 "max_latency_us": 17803.946666666667 00:36:31.804 } 00:36:31.804 ], 00:36:31.804 "core_count": 1 00:36:31.804 } 00:36:31.804 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:31.804 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:36:31.804 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:31.804 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:31.804 | select(.opcode=="crc32c") 00:36:31.804 | "\(.module_name) \(.executed)"' 00:36:31.804 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3760355 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3760355 ']' 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3760355 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3760355 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3760355' 00:36:32.066 killing process with pid 3760355 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3760355 00:36:32.066 Received shutdown signal, test time was about 2.000000 seconds 00:36:32.066 00:36:32.066 Latency(us) 00:36:32.066 [2024-10-12T20:24:50.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.066 [2024-10-12T20:24:50.555Z] =================================================================================================================== 00:36:32.066 [2024-10-12T20:24:50.555Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:32.066 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3760355 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3761040 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 3761040 /var/tmp/bperf.sock 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3761040 ']' 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:32.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:32.328 22:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:32.328 [2024-10-12 22:24:50.633758] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:32.328 [2024-10-12 22:24:50.633817] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761040 ] 00:36:32.328 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:32.328 Zero copy mechanism will not be used. 
00:36:32.328 [2024-10-12 22:24:50.711297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.328 [2024-10-12 22:24:50.738343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.270 22:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:33.270 22:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:33.270 22:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:33.270 22:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:33.270 22:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:33.270 22:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:33.270 22:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:33.530 nvme0n1 00:36:33.530 22:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:33.530 22:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:33.790 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:33.790 Zero copy mechanism will not be used. 00:36:33.790 Running I/O for 2 seconds... 
00:36:35.676 4312.00 IOPS, 539.00 MiB/s [2024-10-12T20:24:54.165Z] 4268.50 IOPS, 533.56 MiB/s 00:36:35.676 Latency(us) 00:36:35.676 [2024-10-12T20:24:54.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.676 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:35.676 nvme0n1 : 2.01 4263.40 532.92 0.00 0.00 3750.23 593.92 14745.60 00:36:35.676 [2024-10-12T20:24:54.165Z] =================================================================================================================== 00:36:35.676 [2024-10-12T20:24:54.165Z] Total : 4263.40 532.92 0.00 0.00 3750.23 593.92 14745.60 00:36:35.676 { 00:36:35.676 "results": [ 00:36:35.676 { 00:36:35.676 "job": "nvme0n1", 00:36:35.676 "core_mask": "0x2", 00:36:35.676 "workload": "randread", 00:36:35.676 "status": "finished", 00:36:35.676 "queue_depth": 16, 00:36:35.676 "io_size": 131072, 00:36:35.676 "runtime": 2.006146, 00:36:35.676 "iops": 4263.398576175413, 00:36:35.676 "mibps": 532.9248220219266, 00:36:35.676 "io_failed": 0, 00:36:35.676 "io_timeout": 0, 00:36:35.676 "avg_latency_us": 3750.2338828481234, 00:36:35.676 "min_latency_us": 593.92, 00:36:35.676 "max_latency_us": 14745.6 00:36:35.676 } 00:36:35.676 ], 00:36:35.676 "core_count": 1 00:36:35.676 } 00:36:35.676 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:35.676 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:35.676 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:35.676 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:35.676 | select(.opcode=="crc32c") 00:36:35.676 | "\(.module_name) \(.executed)"' 00:36:35.676 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3761040 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3761040 ']' 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3761040 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3761040 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3761040' 00:36:35.937 killing process with pid 3761040 00:36:35.937 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3761040 00:36:35.937 Received shutdown signal, test time was about 2.000000 seconds 
00:36:35.937 00:36:35.937 Latency(us) 00:36:35.937 [2024-10-12T20:24:54.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.938 [2024-10-12T20:24:54.427Z] =================================================================================================================== 00:36:35.938 [2024-10-12T20:24:54.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:35.938 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3761040 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3761764 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3761764 /var/tmp/bperf.sock 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3761764 ']' 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:36.198 22:24:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:36.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:36.198 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:36.198 [2024-10-12 22:24:54.488863] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:36.199 [2024-10-12 22:24:54.488918] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761764 ] 00:36:36.199 [2024-10-12 22:24:54.563467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:36.199 [2024-10-12 22:24:54.591528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.199 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:36.199 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:36.199 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:36.199 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:36.199 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:36.458 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.458 22:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.719 nvme0n1 00:36:36.719 22:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:36.719 22:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.980 Running I/O for 2 seconds... 
00:36:38.865 30222.00 IOPS, 118.05 MiB/s [2024-10-12T20:24:57.354Z] 30367.00 IOPS, 118.62 MiB/s 00:36:38.865 Latency(us) 00:36:38.865 [2024-10-12T20:24:57.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.865 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:38.865 nvme0n1 : 2.01 30371.14 118.64 0.00 0.00 4209.31 2170.88 12943.36 00:36:38.865 [2024-10-12T20:24:57.354Z] =================================================================================================================== 00:36:38.865 [2024-10-12T20:24:57.354Z] Total : 30371.14 118.64 0.00 0.00 4209.31 2170.88 12943.36 00:36:38.865 { 00:36:38.865 "results": [ 00:36:38.865 { 00:36:38.865 "job": "nvme0n1", 00:36:38.865 "core_mask": "0x2", 00:36:38.865 "workload": "randwrite", 00:36:38.865 "status": "finished", 00:36:38.865 "queue_depth": 128, 00:36:38.865 "io_size": 4096, 00:36:38.865 "runtime": 2.005983, 00:36:38.865 "iops": 30371.14472056842, 00:36:38.865 "mibps": 118.63728406472039, 00:36:38.865 "io_failed": 0, 00:36:38.865 "io_timeout": 0, 00:36:38.865 "avg_latency_us": 4209.305814019653, 00:36:38.865 "min_latency_us": 2170.88, 00:36:38.865 "max_latency_us": 12943.36 00:36:38.865 } 00:36:38.865 ], 00:36:38.865 "core_count": 1 00:36:38.865 } 00:36:38.865 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:38.865 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:38.865 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:38.865 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:38.865 | select(.opcode=="crc32c") 00:36:38.865 | "\(.module_name) \(.executed)"' 00:36:38.865 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3761764 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3761764 ']' 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3761764 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3761764 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3761764' 00:36:39.126 killing process with pid 3761764 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3761764 00:36:39.126 Received shutdown signal, test time was about 2.000000 seconds 
00:36:39.126 00:36:39.126 Latency(us) 00:36:39.126 [2024-10-12T20:24:57.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.126 [2024-10-12T20:24:57.615Z] =================================================================================================================== 00:36:39.126 [2024-10-12T20:24:57.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:39.126 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3761764 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3762418 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3762418 /var/tmp/bperf.sock 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3762418 ']' 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:39.387 22:24:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:39.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:39.387 [2024-10-12 22:24:57.713926] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:39.387 [2024-10-12 22:24:57.713983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762418 ] 00:36:39.387 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:39.387 Zero copy mechanism will not be used. 
00:36:39.387 [2024-10-12 22:24:57.789070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.387 [2024-10-12 22:24:57.816416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:39.387 22:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:39.657 22:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:39.657 22:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:40.229 nvme0n1 00:36:40.229 22:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:40.229 22:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:40.229 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:40.229 Zero copy mechanism will not be used. 00:36:40.229 Running I/O for 2 seconds... 
00:36:42.115 4007.00 IOPS, 500.88 MiB/s [2024-10-12T20:25:00.604Z] 5657.00 IOPS, 707.12 MiB/s 00:36:42.115 Latency(us) 00:36:42.115 [2024-10-12T20:25:00.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.115 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:42.115 nvme0n1 : 2.00 5657.97 707.25 0.00 0.00 2824.43 1235.63 6553.60 00:36:42.115 [2024-10-12T20:25:00.604Z] =================================================================================================================== 00:36:42.115 [2024-10-12T20:25:00.604Z] Total : 5657.97 707.25 0.00 0.00 2824.43 1235.63 6553.60 00:36:42.115 { 00:36:42.115 "results": [ 00:36:42.115 { 00:36:42.115 "job": "nvme0n1", 00:36:42.115 "core_mask": "0x2", 00:36:42.115 "workload": "randwrite", 00:36:42.115 "status": "finished", 00:36:42.115 "queue_depth": 16, 00:36:42.115 "io_size": 131072, 00:36:42.115 "runtime": 2.002485, 00:36:42.115 "iops": 5657.969972309405, 00:36:42.115 "mibps": 707.2462465386757, 00:36:42.115 "io_failed": 0, 00:36:42.115 "io_timeout": 0, 00:36:42.115 "avg_latency_us": 2824.4290956163577, 00:36:42.115 "min_latency_us": 1235.6266666666668, 00:36:42.115 "max_latency_us": 6553.6 00:36:42.115 } 00:36:42.115 ], 00:36:42.115 "core_count": 1 00:36:42.115 } 00:36:42.115 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:42.115 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:42.376 | select(.opcode=="crc32c") 00:36:42.376 | "\(.module_name) \(.executed)"' 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3762418 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3762418 ']' 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3762418 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3762418 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3762418' 00:36:42.376 killing process with pid 3762418 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3762418 00:36:42.376 Received shutdown signal, test time was about 2.000000 seconds 
00:36:42.376 00:36:42.376 Latency(us) 00:36:42.376 [2024-10-12T20:25:00.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.376 [2024-10-12T20:25:00.865Z] =================================================================================================================== 00:36:42.376 [2024-10-12T20:25:00.865Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:42.376 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3762418 00:36:42.637 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3760332 00:36:42.637 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3760332 ']' 00:36:42.637 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3760332 00:36:42.637 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:36:42.637 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:42.637 22:25:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3760332 00:36:42.637 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:42.637 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:42.637 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3760332' 00:36:42.637 killing process with pid 3760332 00:36:42.637 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3760332 00:36:42.637 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3760332 00:36:42.898 00:36:42.898 
real 0m14.971s 00:36:42.898 user 0m29.821s 00:36:42.898 sys 0m3.705s 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:42.898 ************************************ 00:36:42.898 END TEST nvmf_digest_clean 00:36:42.898 ************************************ 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:42.898 ************************************ 00:36:42.898 START TEST nvmf_digest_error 00:36:42.898 ************************************ 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=3763121 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 3763121 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3763121 ']' 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:42.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:42.898 22:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:42.898 [2024-10-12 22:25:01.285543] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:42.898 [2024-10-12 22:25:01.285591] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:42.898 [2024-10-12 22:25:01.367959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.160 [2024-10-12 22:25:01.395702] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:43.160 [2024-10-12 22:25:01.395730] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:43.160 [2024-10-12 22:25:01.395735] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:43.160 [2024-10-12 22:25:01.395740] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:43.160 [2024-10-12 22:25:01.395744] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:43.160 [2024-10-12 22:25:01.395759] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.732 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:43.733 [2024-10-12 22:25:02.105785] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.733 22:25:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:43.733 null0 00:36:43.733 [2024-10-12 22:25:02.177760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:43.733 [2024-10-12 22:25:02.201943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3763429 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3763429 /var/tmp/bperf.sock 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3763429 ']' 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:43.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:43.733 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:43.994 [2024-10-12 22:25:02.258446] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:43.994 [2024-10-12 22:25:02.258495] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763429 ] 00:36:43.994 [2024-10-12 22:25:02.333057] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.994 [2024-10-12 22:25:02.361314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.994 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:43.994 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:43.994 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:43.994 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:44.254 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:44.254 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.254 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:44.254 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.254 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:44.254 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:44.515 nvme0n1 00:36:44.515 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:44.515 22:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.515 22:25:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:44.777 22:25:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.777 22:25:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:44.777 22:25:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:44.777 Running I/O for 2 seconds... 00:36:44.777 [2024-10-12 22:25:03.115450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.115480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.115489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.124122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.124146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.124153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.133862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.133880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.133887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.142378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.142396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21200 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.142403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.151532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.151549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.151557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.159995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.160012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.160019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.169833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.169851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.169857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.179469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.179487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.179493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.188774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.188791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.188797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.197439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.197456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.197462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.205959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.205976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.205983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.215788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 
00:36:44.777 [2024-10-12 22:25:03.215805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.215812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.224022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.224040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.224047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.233785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.233803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.233810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.242066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.242083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.242090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.251302] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.251319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.251326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:44.777 [2024-10-12 22:25:03.259868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:44.777 [2024-10-12 22:25:03.259886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.777 [2024-10-12 22:25:03.259892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.269338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.269356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.269363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.278359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.278377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.278389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.287188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.287205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.287212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.296363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.296381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.296387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.305873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.305891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.305898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.314553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.314571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.314578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.321916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.321934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.321941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.332332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.332350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.332357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.341110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.341128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.341134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.350136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.350154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.350160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.358740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.358758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.358764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.367748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.367766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.367772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.377973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.377990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.377996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.387575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.387593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7685 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.387599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.395780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.395797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.395804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.405927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.039 [2024-10-12 22:25:03.405944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.039 [2024-10-12 22:25:03.405950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.039 [2024-10-12 22:25:03.415885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.415903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.415909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.424783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.424801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:20578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.424808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.432983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.433001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.433011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.445391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.445409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.445415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.455781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.455798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.455805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.465376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.465394] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.465400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.473209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.473227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.473233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.483138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.483156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.483163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.491188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.491206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.491212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.500666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.500683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.500690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.510356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.510373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.510380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.040 [2024-10-12 22:25:03.518032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.040 [2024-10-12 22:25:03.518053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.040 [2024-10-12 22:25:03.518060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.527747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.527765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.527772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.537481] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.537499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.537505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.547035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.547053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.547059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.555980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.555998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.556004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.563982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.564000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.564006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.573458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.573475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.573482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.581956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.581973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.581980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.590963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.590980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.590987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.599717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.599734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.599740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.608671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.608689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.608695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.618844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.618862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.618868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.626140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.626158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.626164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.302 [2024-10-12 22:25:03.636413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.302 [2024-10-12 22:25:03.636431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.302 [2024-10-12 22:25:03.636438] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.645922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.645940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.645946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.655352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.655369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.655376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.664522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.664540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.664546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.673347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.673365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2745 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.673375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.681885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.681902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.681908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.690581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.690599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.690606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.699347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.699364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.699371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.708491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.708509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:6870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.708515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.717374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.717392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.717398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.727825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.727842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.727848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.736636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.736654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.736661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.744470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.744487] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.744494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.753773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.753793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.753800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.765095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.765115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.765121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.777185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.777202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.777209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.303 [2024-10-12 22:25:03.784600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc5aa90) 00:36:45.303 [2024-10-12 22:25:03.784618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.303 [2024-10-12 22:25:03.784625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.794950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.794967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.794974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.802664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.802681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.802688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.813976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.813994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.814001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.822851] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.822869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.822875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.833082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.833100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.833111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.842135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.842153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.842159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.851045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.851062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.851069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.860019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.860036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.860042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.868721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.868738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.868745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.876863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.876881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.876888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.885797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.885815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.885821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.895504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.565 [2024-10-12 22:25:03.895522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.565 [2024-10-12 22:25:03.895529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.565 [2024-10-12 22:25:03.904460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.904477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.904483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:03.912779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.912797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.912807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:03.921474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.921491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.921498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:03.931676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.931694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.931700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:03.940691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.940709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.940716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:03.950024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.950041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.950047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:03.959274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.959290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19137 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.959297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:03.967711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.967729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.967735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:03.976861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.976878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.976885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:03.984893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.984911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.984917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:03.994527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:03.994545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:110 nsid:1 lba:6976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:03.994551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:04.002279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:04.002298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:04.002305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:04.012159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:04.012177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:04.012184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:04.021742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:04.021760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:04.021766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:04.030413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:04.030430] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:04.030437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:04.039024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:04.039042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:04.039048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.566 [2024-10-12 22:25:04.048407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.566 [2024-10-12 22:25:04.048424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.566 [2024-10-12 22:25:04.048431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.057460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.057478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.057484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.065972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.065990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.066000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.075041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.075058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.075065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.083882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.083899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.083905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.093223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.093240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.093247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 27693.00 IOPS, 108.18 MiB/s [2024-10-12T20:25:04.317Z] 
[2024-10-12 22:25:04.101942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.101959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.101966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.111472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.111489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.111495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.119423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.119440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.119446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.129132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.129149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.129156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.137807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.137824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.137830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.146647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.146668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.146675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.155606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.155623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.155630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.164159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.164176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.164183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.173138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.173156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.173162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.182781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.182799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.182806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.194744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.828 [2024-10-12 22:25:04.194761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.828 [2024-10-12 22:25:04.194768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.828 [2024-10-12 22:25:04.203531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.203548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:45.829 [2024-10-12 22:25:04.203555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.212826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.212843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.212850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.221369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.221386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.221393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.230575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.230593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.230599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.239593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.239610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:5439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.239616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.248712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.248728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.248734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.257039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.257055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.257062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.266216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.266233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.266239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.275308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.275325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.275331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.284745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.284762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.284769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.294056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.294072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.294079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.302785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:45.829 [2024-10-12 22:25:04.302802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.302811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:45.829 [2024-10-12 22:25:04.311406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 
00:36:45.829 [2024-10-12 22:25:04.311423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:45.829 [2024-10-12 22:25:04.311430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.321019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.321037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.321043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.328260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.328277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.328284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.338672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.338689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.338696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.345963] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.345980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.345987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.355757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.355774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.355780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.365126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.365142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.365149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.374087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.374109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.374116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.383126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.383143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.383150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.391510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.391527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.391533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.400575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.400592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.400598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.408781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.408797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.408804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.417627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.417644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.417650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.426844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.426861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.426867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.435058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.435075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.435081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.443696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.443713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.443720] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.453466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.453483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.453494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.462080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.462097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.462108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.470765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.470781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.470788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.479419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.479436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19951 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.479443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.489275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.489292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.489299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.499992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.500009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.500016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.508660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.508677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.508684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.518328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.518345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:70 nsid:1 lba:6887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.518351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.527527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.527545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.527551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.092 [2024-10-12 22:25:04.536801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.092 [2024-10-12 22:25:04.536822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.092 [2024-10-12 22:25:04.536829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.093 [2024-10-12 22:25:04.545203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.093 [2024-10-12 22:25:04.545220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.093 [2024-10-12 22:25:04.545227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.093 [2024-10-12 22:25:04.554080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.093 [2024-10-12 22:25:04.554097] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.093 [2024-10-12 22:25:04.554108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.093 [2024-10-12 22:25:04.562478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.093 [2024-10-12 22:25:04.562495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.093 [2024-10-12 22:25:04.562501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.093 [2024-10-12 22:25:04.570467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.093 [2024-10-12 22:25:04.570484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.093 [2024-10-12 22:25:04.570491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.355 [2024-10-12 22:25:04.580160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.355 [2024-10-12 22:25:04.580177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.355 [2024-10-12 22:25:04.580184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.355 [2024-10-12 22:25:04.589309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc5aa90) 00:36:46.355 [2024-10-12 22:25:04.589326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.355 [2024-10-12 22:25:04.589333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.355 [2024-10-12 22:25:04.598140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.355 [2024-10-12 22:25:04.598157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.355 [2024-10-12 22:25:04.598164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.355 [2024-10-12 22:25:04.606696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.355 [2024-10-12 22:25:04.606713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.355 [2024-10-12 22:25:04.606719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.355 [2024-10-12 22:25:04.615067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.355 [2024-10-12 22:25:04.615084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.355 [2024-10-12 22:25:04.615090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.355 [2024-10-12 22:25:04.624556] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.355 [2024-10-12 22:25:04.624573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.355 [2024-10-12 22:25:04.624579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.355 [2024-10-12 22:25:04.633782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.355 [2024-10-12 22:25:04.633799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.355 [2024-10-12 22:25:04.633806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.355 [2024-10-12 22:25:04.641049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.355 [2024-10-12 22:25:04.641066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.355 [2024-10-12 22:25:04.641072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.355 [2024-10-12 22:25:04.651334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.355 [2024-10-12 22:25:04.651351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.355 [2024-10-12 22:25:04.651357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:46.355 [2024-10-12 22:25:04.660915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.660932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.660939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.670971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.670987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.670994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.679562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.679579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.679586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.688302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.688319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.688329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.698162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.698179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.698185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.707330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.707347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.707353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.715702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.715718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.715725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.724320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.724337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.724344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.734101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.734122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.734128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.742908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.742924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.742931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.752610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.752627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.752633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.761696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.761713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6464 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.761719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.770289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.770310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.770317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.779605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.779622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.779629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.787691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.787708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.787714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.797176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.797192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:40 nsid:1 lba:4706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.797199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.806281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.806298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.806304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.815061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.815078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.815085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.823749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.823766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.823772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.832966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.832983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.832990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.356 [2024-10-12 22:25:04.841354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.356 [2024-10-12 22:25:04.841371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.356 [2024-10-12 22:25:04.841378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.849834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.849851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.849858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.859621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.859638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.859644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.868083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.868101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.868112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.876964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.876981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.876988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.886336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.886354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.886360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.895508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.895525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.895532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.903992] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.904009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.904016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.913764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.913782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.913788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.922015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.922032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.922042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.932027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.932045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.932051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.940775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.940792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.940799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.950628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.950645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.950652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.959747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.959764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.959770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.968727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.968744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.968751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.978131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.978149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.978155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.986818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.618 [2024-10-12 22:25:04.986835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.618 [2024-10-12 22:25:04.986841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.618 [2024-10-12 22:25:04.996199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:04.996216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:04.996222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.005629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.005646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:05.005653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.014462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.014479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:05.014486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.023644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.023662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:05.023668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.033258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.033274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:05.033281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.042485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.042502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:46.619 [2024-10-12 22:25:05.042508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.050488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.050505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:05.050511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.059635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.059652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:05.059659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.068726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.068743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:05.068749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.077800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.077817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 
nsid:1 lba:6681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:05.077826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.087308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.087325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:05.087332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.619 [2024-10-12 22:25:05.095823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.619 [2024-10-12 22:25:05.095840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.619 [2024-10-12 22:25:05.095847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.881 27986.00 IOPS, 109.32 MiB/s [2024-10-12T20:25:05.370Z] [2024-10-12 22:25:05.105322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc5aa90) 00:36:46.881 [2024-10-12 22:25:05.105340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:46.881 [2024-10-12 22:25:05.105346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:46.881 00:36:46.881 Latency(us) 00:36:46.881 [2024-10-12T20:25:05.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:36:46.881 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:46.881 nvme0n1 : 2.05 27441.78 107.19 0.00 0.00 4566.00 2252.80 44346.03 00:36:46.881 [2024-10-12T20:25:05.370Z] =================================================================================================================== 00:36:46.881 [2024-10-12T20:25:05.370Z] Total : 27441.78 107.19 0.00 0.00 4566.00 2252.80 44346.03 00:36:46.881 { 00:36:46.881 "results": [ 00:36:46.881 { 00:36:46.881 "job": "nvme0n1", 00:36:46.881 "core_mask": "0x2", 00:36:46.881 "workload": "randread", 00:36:46.881 "status": "finished", 00:36:46.881 "queue_depth": 128, 00:36:46.881 "io_size": 4096, 00:36:46.881 "runtime": 2.04564, 00:36:46.881 "iops": 27441.778612072507, 00:36:46.881 "mibps": 107.19444770340823, 00:36:46.881 "io_failed": 0, 00:36:46.881 "io_timeout": 0, 00:36:46.881 "avg_latency_us": 4566.001455512802, 00:36:46.881 "min_latency_us": 2252.8, 00:36:46.881 "max_latency_us": 44346.026666666665 00:36:46.881 } 00:36:46.881 ], 00:36:46.881 "core_count": 1 00:36:46.881 } 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:46.881 | .driver_specific 00:36:46.881 | .nvme_error 00:36:46.881 | .status_code 00:36:46.881 | .command_transient_transport_error' 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- 
# killprocess 3763429 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3763429 ']' 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3763429 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:46.881 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3763429 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3763429' 00:36:47.142 killing process with pid 3763429 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3763429 00:36:47.142 Received shutdown signal, test time was about 2.000000 seconds 00:36:47.142 00:36:47.142 Latency(us) 00:36:47.142 [2024-10-12T20:25:05.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:47.142 [2024-10-12T20:25:05.631Z] =================================================================================================================== 00:36:47.142 [2024-10-12T20:25:05.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3763429 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:47.142 22:25:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3763993 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3763993 /var/tmp/bperf.sock 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3763993 ']' 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:47.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:47.142 22:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.142 [2024-10-12 22:25:05.571009] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:36:47.142 [2024-10-12 22:25:05.571067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763993 ] 00:36:47.142 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:47.142 Zero copy mechanism will not be used. 00:36:47.403 [2024-10-12 22:25:05.647711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.403 [2024-10-12 22:25:05.675739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.974 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:47.974 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:47.974 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:47.974 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:48.235 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:48.235 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.235 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.235 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.235 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:48.235 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:48.497 nvme0n1 00:36:48.497 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:48.497 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.497 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:48.497 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.497 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:48.497 22:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:48.497 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:48.497 Zero copy mechanism will not be used. 00:36:48.497 Running I/O for 2 seconds... 
00:36:48.497 [2024-10-12 22:25:06.892498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.497 [2024-10-12 22:25:06.892531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.497 [2024-10-12 22:25:06.892540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.497 [2024-10-12 22:25:06.903780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.497 [2024-10-12 22:25:06.903802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.497 [2024-10-12 22:25:06.903810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.497 [2024-10-12 22:25:06.916035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.497 [2024-10-12 22:25:06.916055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.497 [2024-10-12 22:25:06.916062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.497 [2024-10-12 22:25:06.928691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.497 [2024-10-12 22:25:06.928710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.498 [2024-10-12 22:25:06.928717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.498 [2024-10-12 22:25:06.941531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.498 [2024-10-12 22:25:06.941554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.498 [2024-10-12 22:25:06.941561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.498 [2024-10-12 22:25:06.953751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.498 [2024-10-12 22:25:06.953770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.498 [2024-10-12 22:25:06.953776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.498 [2024-10-12 22:25:06.966027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.498 [2024-10-12 22:25:06.966046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.498 [2024-10-12 22:25:06.966053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.498 [2024-10-12 22:25:06.978640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.498 [2024-10-12 22:25:06.978658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.498 [2024-10-12 22:25:06.978665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.759 [2024-10-12 22:25:06.990820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.759 [2024-10-12 22:25:06.990838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.759 [2024-10-12 22:25:06.990844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.759 [2024-10-12 22:25:07.003349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.759 [2024-10-12 22:25:07.003367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.759 [2024-10-12 22:25:07.003374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.759 [2024-10-12 22:25:07.014976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.759 [2024-10-12 22:25:07.014994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.759 [2024-10-12 22:25:07.015001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.759 [2024-10-12 22:25:07.025571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.759 [2024-10-12 22:25:07.025590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.759 [2024-10-12 22:25:07.025596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.759 [2024-10-12 22:25:07.034051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.759 [2024-10-12 22:25:07.034070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.759 [2024-10-12 22:25:07.034076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.759 [2024-10-12 22:25:07.043664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.759 [2024-10-12 22:25:07.043684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.759 [2024-10-12 22:25:07.043691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.759 [2024-10-12 22:25:07.053531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.759 [2024-10-12 22:25:07.053550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.053556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.064024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.064044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.064050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.074921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.074940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.074947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.085283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.085303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.085309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.095509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.095528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.095535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.103787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.103806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.103813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.112267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.112286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.112293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.123712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.123731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.123743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.134630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.134650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.134657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.146290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.146310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.146317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.157399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.157419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.157425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.168362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.168381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.168388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.178760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.178779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.178785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.189796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.189816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.189823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.199783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.199802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.199808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.208914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.208933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.208939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.219151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.219170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.219176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.229588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.229607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.229614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.760 [2024-10-12 22:25:07.237963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:48.760 [2024-10-12 22:25:07.237982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.760 [2024-10-12 22:25:07.237989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.249498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.249517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.249523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.260581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.260600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.260606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.272217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.272236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.272242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.282432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.282451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.282458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.293117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.293136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.293143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.305348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.305367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.305377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.316065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.316084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.316091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.326451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.326470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.326477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.337253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.337271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.337278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.347476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.347495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.347502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.357233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.357252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.357259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.369424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.369443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.369450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.380713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.380732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.380739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.393117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.393137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.393143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.405297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.405319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.405326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.417028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.417047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.417054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.429408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.429427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.429434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.440928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.440947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.440954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.453407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.453427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.453433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.465209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.465228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.465234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.477977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.477996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.478003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.490391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.490410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.490417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.023 [2024-10-12 22:25:07.502708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.023 [2024-10-12 22:25:07.502728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.023 [2024-10-12 22:25:07.502734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.514320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.514339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.514346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.525685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.525704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.525711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.536851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.536871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.536877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.548211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.548231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.548238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.560167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.560186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.560193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.572068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.572088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.572094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.583029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.583049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.583056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.595183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.595203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.595209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.608187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.608206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.608216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.620552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.620571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.620578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.633142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.633161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.633169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.646002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.646022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.646028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.658387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.658406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.658413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.670332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.670352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.670359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.682101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.682126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.682132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.694269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.694288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.286 [2024-10-12 22:25:07.694295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.286 [2024-10-12 22:25:07.705814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.286 [2024-10-12 22:25:07.705833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.287 [2024-10-12 22:25:07.705839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.287 [2024-10-12 22:25:07.714645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730)
00:36:49.287 [2024-10-12 22:25:07.714667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.287 [2024-10-12 22:25:07.714674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.287 [2024-10-12 22:25:07.726735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.287 [2024-10-12 22:25:07.726755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.287 [2024-10-12 22:25:07.726762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.287 [2024-10-12 22:25:07.737197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.287 [2024-10-12 22:25:07.737216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.287 [2024-10-12 22:25:07.737222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.287 [2024-10-12 22:25:07.746446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.287 [2024-10-12 22:25:07.746465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.287 [2024-10-12 22:25:07.746472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.287 [2024-10-12 22:25:07.757830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.287 [2024-10-12 22:25:07.757848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.287 [2024-10-12 22:25:07.757855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.287 [2024-10-12 22:25:07.768844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.287 [2024-10-12 22:25:07.768863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.287 [2024-10-12 22:25:07.768869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.549 [2024-10-12 22:25:07.780294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.549 [2024-10-12 22:25:07.780313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.549 [2024-10-12 22:25:07.780319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.549 [2024-10-12 22:25:07.792291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.549 [2024-10-12 22:25:07.792310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.549 [2024-10-12 22:25:07.792316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.549 [2024-10-12 22:25:07.801856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.549 [2024-10-12 22:25:07.801874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.549 [2024-10-12 22:25:07.801880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.549 [2024-10-12 22:25:07.811534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.549 [2024-10-12 22:25:07.811552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.549 [2024-10-12 22:25:07.811559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.549 [2024-10-12 22:25:07.822448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.549 [2024-10-12 22:25:07.822468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.549 [2024-10-12 22:25:07.822474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.549 [2024-10-12 22:25:07.831303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.549 [2024-10-12 22:25:07.831322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.549 [2024-10-12 22:25:07.831328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.549 [2024-10-12 22:25:07.842597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 
00:36:49.549 [2024-10-12 22:25:07.842615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.549 [2024-10-12 22:25:07.842622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.549 [2024-10-12 22:25:07.852047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.549 [2024-10-12 22:25:07.852066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.549 [2024-10-12 22:25:07.852072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.549 [2024-10-12 22:25:07.863656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.863675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.863682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.871262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.871281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.871287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.550 2765.00 IOPS, 345.62 MiB/s [2024-10-12T20:25:08.039Z] [2024-10-12 22:25:07.882603] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.882622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.882628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.893031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.893049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.893059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.901816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.901834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.901841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.907816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.907835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.907841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.917020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.917039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.917046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.927594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.927612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.927618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.937203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.937221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.937228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.947273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.947292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.947299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.957248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.957266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.957273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.968218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.968237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.968243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.979623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.979645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.979652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:07.990878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:07.990896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:07.990903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:08.001062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:08.001081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:08.001087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:08.012129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:08.012148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:08.012154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:08.023192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:08.023210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.550 [2024-10-12 22:25:08.023216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.550 [2024-10-12 22:25:08.034670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.550 [2024-10-12 22:25:08.034689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:49.550 [2024-10-12 22:25:08.034695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.043658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.043676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.043683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.053131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.053150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.053156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.064257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.064277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.064283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.076365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.076384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.076390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.088018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.088037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.088043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.097966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.097984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.097991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.104479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.104499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.104505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.115002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.115021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.115028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.126691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.126710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.126716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.137265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.137284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.137290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.147585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.147603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.147610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.158824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 
00:36:49.811 [2024-10-12 22:25:08.158843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.158852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.169842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.169861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.169868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.811 [2024-10-12 22:25:08.182237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.811 [2024-10-12 22:25:08.182255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.811 [2024-10-12 22:25:08.182262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.812 [2024-10-12 22:25:08.194109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.812 [2024-10-12 22:25:08.194127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.812 [2024-10-12 22:25:08.194134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.812 [2024-10-12 22:25:08.206563] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.812 [2024-10-12 22:25:08.206581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.812 [2024-10-12 22:25:08.206587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.812 [2024-10-12 22:25:08.218248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.812 [2024-10-12 22:25:08.218267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.812 [2024-10-12 22:25:08.218273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.812 [2024-10-12 22:25:08.230370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.812 [2024-10-12 22:25:08.230388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.812 [2024-10-12 22:25:08.230395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.812 [2024-10-12 22:25:08.241839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.812 [2024-10-12 22:25:08.241858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.812 [2024-10-12 22:25:08.241864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:36:49.812 [2024-10-12 22:25:08.252293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.812 [2024-10-12 22:25:08.252312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.812 [2024-10-12 22:25:08.252318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.812 [2024-10-12 22:25:08.263361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.812 [2024-10-12 22:25:08.263380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.812 [2024-10-12 22:25:08.263386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.812 [2024-10-12 22:25:08.272901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.812 [2024-10-12 22:25:08.272919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.812 [2024-10-12 22:25:08.272926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.812 [2024-10-12 22:25:08.279238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.812 [2024-10-12 22:25:08.279255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.812 [2024-10-12 22:25:08.279262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.812 [2024-10-12 22:25:08.287940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:49.812 [2024-10-12 22:25:08.287959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.812 [2024-10-12 22:25:08.287967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.298958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.298977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.298983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.308602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.308621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.308627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.319043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.319062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.319068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.328616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.328634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.328641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.339644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.339663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.339672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.351940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.351958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.351965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.363252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.363271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:50.074 [2024-10-12 22:25:08.363277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.375543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.375561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.375568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.388188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.388206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.388213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.399859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.399878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.399884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.411152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.411170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.411176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.423289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.423308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.423314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.435762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.435780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.435787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.446348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.446370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.074 [2024-10-12 22:25:08.446376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.074 [2024-10-12 22:25:08.455435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.074 [2024-10-12 22:25:08.455454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.455460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.464323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.075 [2024-10-12 22:25:08.464341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.464348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.473946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.075 [2024-10-12 22:25:08.473964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.473971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.483641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.075 [2024-10-12 22:25:08.483660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.483666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.494442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 
00:36:50.075 [2024-10-12 22:25:08.494461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.494467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.503300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.075 [2024-10-12 22:25:08.503318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.503325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.512063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.075 [2024-10-12 22:25:08.512082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.512089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.521181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.075 [2024-10-12 22:25:08.521200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.521206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.527348] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.075 [2024-10-12 22:25:08.527368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.527374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.536904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.075 [2024-10-12 22:25:08.536922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.536929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.546890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.075 [2024-10-12 22:25:08.546908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.546914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.075 [2024-10-12 22:25:08.558073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.075 [2024-10-12 22:25:08.558092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.075 [2024-10-12 22:25:08.558098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.568228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.568247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.568253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.578924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.578945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.578951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.588812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.588831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.588837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.597714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.597732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.597738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.604724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.604743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.604754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.611315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.611334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.611341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.620086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.620108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.620114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.624362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.624382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.624389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.632203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.632220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.632227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.637208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.637226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.637233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.641295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.641314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.641320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.650526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.650545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:50.338 [2024-10-12 22:25:08.650551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.660714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.660733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.660740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.670973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.670996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.671002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.680160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.680178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.680185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.690886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.690904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.690911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.700003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.700021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.700028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.705870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.705888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.705895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.711924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.711943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.711950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.338 [2024-10-12 22:25:08.721188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.338 [2024-10-12 22:25:08.721207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.338 [2024-10-12 22:25:08.721213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.731918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.731938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.731944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.741657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.741676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.741682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.746272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.746291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.746298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.754483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 
00:36:50.339 [2024-10-12 22:25:08.754501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.754508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.759137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.759156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.759162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.769475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.769495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.769502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.779043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.779063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.779071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.784024] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.784043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.784050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.791295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.791314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.791321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.798012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.798031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.798037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.802346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.802363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.802373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.810965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.810983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.810990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.339 [2024-10-12 22:25:08.817047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.339 [2024-10-12 22:25:08.817066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.339 [2024-10-12 22:25:08.817073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.603 [2024-10-12 22:25:08.826146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.603 [2024-10-12 22:25:08.826165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.603 [2024-10-12 22:25:08.826171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.603 [2024-10-12 22:25:08.830879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.603 [2024-10-12 22:25:08.830899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.603 [2024-10-12 22:25:08.830905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.603 [2024-10-12 22:25:08.835260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.603 [2024-10-12 22:25:08.835279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.603 [2024-10-12 22:25:08.835286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.603 [2024-10-12 22:25:08.839704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.603 [2024-10-12 22:25:08.839723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.603 [2024-10-12 22:25:08.839729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.603 [2024-10-12 22:25:08.844178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.603 [2024-10-12 22:25:08.844197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.603 [2024-10-12 22:25:08.844203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.603 [2024-10-12 22:25:08.849870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.603 [2024-10-12 22:25:08.849888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.603 [2024-10-12 22:25:08.849895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:50.603 [2024-10-12 22:25:08.858402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.603 [2024-10-12 22:25:08.858424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.603 [2024-10-12 22:25:08.858430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:50.603 [2024-10-12 22:25:08.866399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.603 [2024-10-12 22:25:08.866418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.603 [2024-10-12 22:25:08.866424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:50.603 [2024-10-12 22:25:08.870742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.603 [2024-10-12 22:25:08.870760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:50.603 [2024-10-12 22:25:08.870767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:50.603 [2024-10-12 22:25:08.875002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdd7730) 00:36:50.603 [2024-10-12 22:25:08.875020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:50.603 [2024-10-12 22:25:08.875027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:50.603 3075.00 IOPS, 384.38 MiB/s
00:36:50.603 Latency(us)
00:36:50.603 [2024-10-12T20:25:09.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:50.603 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:36:50.603 nvme0n1 : 2.00 3080.00 385.00 0.00 0.00 5191.83 576.85 14636.37
00:36:50.603 [2024-10-12T20:25:09.092Z] ===================================================================================================================
00:36:50.603 [2024-10-12T20:25:09.092Z] Total : 3080.00 385.00 0.00 0.00 5191.83 576.85 14636.37
00:36:50.603 {
00:36:50.603   "results": [
00:36:50.603     {
00:36:50.603       "job": "nvme0n1",
00:36:50.603       "core_mask": "0x2",
00:36:50.603       "workload": "randread",
00:36:50.603       "status": "finished",
00:36:50.603       "queue_depth": 16,
00:36:50.603       "io_size": 131072,
00:36:50.603       "runtime": 2.001946,
00:36:50.603       "iops": 3080.0031569283087,
00:36:50.603       "mibps": 385.0003946160386,
00:36:50.603       "io_failed": 0,
00:36:50.603       "io_timeout": 0,
00:36:50.603       "avg_latency_us": 5191.828080873608,
00:36:50.604       "min_latency_us": 576.8533333333334,
00:36:50.604       "max_latency_us": 14636.373333333333
00:36:50.604     }
00:36:50.604   ],
00:36:50.604   "core_count": 1
00:36:50.604 }
00:36:50.604 22:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:50.604 22:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:50.604 22:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:50.604 | .driver_specific
00:36:50.604 | .nvme_error
00:36:50.604 | .status_code
00:36:50.604 | .command_transient_transport_error'
00:36:50.604 22:25:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:50.604 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 ))
00:36:50.604 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3763993
00:36:50.604 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3763993 ']'
00:36:50.604 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3763993
00:36:50.604 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:36:50.604 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3763993
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3763993'
00:36:50.938 killing process with pid 3763993
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3763993
00:36:50.938 Received shutdown signal, test time was about 2.000000 seconds
00:36:50.938
00:36:50.938 Latency(us)
00:36:50.938 [2024-10-12T20:25:09.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:50.938 [2024-10-12T20:25:09.427Z] ===================================================================================================================
00:36:50.938 [2024-10-12T20:25:09.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3763993
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3764737
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3764737 /var/tmp/bperf.sock
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3764737 ']'
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:50.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:36:50.938 22:25:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:50.938 [2024-10-12 22:25:09.307405] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:36:50.938 [2024-10-12 22:25:09.307463] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3764737 ]
00:36:50.938 [2024-10-12 22:25:09.385746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:50.938 [2024-10-12 22:25:09.413480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:36:51.905 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:36:51.905 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:36:51.905 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:51.905 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:51.905 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:51.905 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:36:51.905 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@10 -- # set +x 00:36:51.905 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.905 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:51.905 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:52.476 nvme0n1 00:36:52.476 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:52.476 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.476 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:52.476 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.476 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:52.476 22:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:52.476 Running I/O for 2 seconds... 
00:36:52.476 [2024-10-12 22:25:10.813747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e27f0 00:36:52.476 [2024-10-12 22:25:10.814711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.476 [2024-10-12 22:25:10.814740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:52.476 [2024-10-12 22:25:10.822406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.476 [2024-10-12 22:25:10.823327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.476 [2024-10-12 22:25:10.823346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:52.476 [2024-10-12 22:25:10.831303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.476 [2024-10-12 22:25:10.832171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.476 [2024-10-12 22:25:10.832188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.476 [2024-10-12 22:25:10.839935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.476 [2024-10-12 22:25:10.840801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.476 [2024-10-12 22:25:10.840819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.476 [2024-10-12 22:25:10.848552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.476 [2024-10-12 22:25:10.849419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.476 [2024-10-12 22:25:10.849436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.476 [2024-10-12 22:25:10.857130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.476 [2024-10-12 22:25:10.858017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.476 [2024-10-12 22:25:10.858034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.476 [2024-10-12 22:25:10.865693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.476 [2024-10-12 22:25:10.866574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.476 [2024-10-12 22:25:10.866590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.476 [2024-10-12 22:25:10.874255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.476 [2024-10-12 22:25:10.875131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.476 [2024-10-12 22:25:10.875148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.476 [2024-10-12 22:25:10.882834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.476 [2024-10-12 22:25:10.883719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.476 [2024-10-12 22:25:10.883736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.476 [2024-10-12 22:25:10.891407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.476 [2024-10-12 22:25:10.892290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.476 [2024-10-12 22:25:10.892306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.476 [2024-10-12 22:25:10.899979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.476 [2024-10-12 22:25:10.900855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.477 [2024-10-12 22:25:10.900873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.477 [2024-10-12 22:25:10.908535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.477 [2024-10-12 22:25:10.909407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.477 [2024-10-12 22:25:10.909424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.477 [2024-10-12 22:25:10.917054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.477 [2024-10-12 22:25:10.917927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.477 [2024-10-12 22:25:10.917943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.477 [2024-10-12 22:25:10.925636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.477 [2024-10-12 22:25:10.926479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.477 [2024-10-12 22:25:10.926496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.477 [2024-10-12 22:25:10.934223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.477 [2024-10-12 22:25:10.935096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.477 [2024-10-12 22:25:10.935117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.477 [2024-10-12 22:25:10.942768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.477 [2024-10-12 22:25:10.943647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.477 
[2024-10-12 22:25:10.943664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.477 [2024-10-12 22:25:10.951322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.477 [2024-10-12 22:25:10.952177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.477 [2024-10-12 22:25:10.952194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.477 [2024-10-12 22:25:10.959848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.477 [2024-10-12 22:25:10.960731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.477 [2024-10-12 22:25:10.960747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:10.968405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.739 [2024-10-12 22:25:10.969243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:10.969259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:10.976969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.739 [2024-10-12 22:25:10.977887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10443 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:10.977904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:10.985555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.739 [2024-10-12 22:25:10.986392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:10.986409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:10.994078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.739 [2024-10-12 22:25:10.994950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:10.994971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.002638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.739 [2024-10-12 22:25:11.003507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.003524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.011162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.739 [2024-10-12 22:25:11.012015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:13309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.012032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.019714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.739 [2024-10-12 22:25:11.020597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.020614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.028256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.739 [2024-10-12 22:25:11.029142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.029158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.036804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.739 [2024-10-12 22:25:11.037657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.037673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.045370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.739 [2024-10-12 22:25:11.046239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.046255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.053929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.739 [2024-10-12 22:25:11.054773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.054789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.062461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.739 [2024-10-12 22:25:11.063316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.063333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.071014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.739 [2024-10-12 22:25:11.071885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.071902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.079562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.739 
[2024-10-12 22:25:11.080399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.080416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.088286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.739 [2024-10-12 22:25:11.089133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.089150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.096843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.739 [2024-10-12 22:25:11.097709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.097725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.105370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.739 [2024-10-12 22:25:11.106230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.106247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.113908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.739 [2024-10-12 22:25:11.114780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.114797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.122455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.739 [2024-10-12 22:25:11.123306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.123323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.131017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.739 [2024-10-12 22:25:11.131894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.739 [2024-10-12 22:25:11.131911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.739 [2024-10-12 22:25:11.139554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.740 [2024-10-12 22:25:11.140407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.740 [2024-10-12 22:25:11.140423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.740 [2024-10-12 22:25:11.148089] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.740 [2024-10-12 22:25:11.148987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.740 [2024-10-12 22:25:11.149004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.740 [2024-10-12 22:25:11.156613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.740 [2024-10-12 22:25:11.157451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.740 [2024-10-12 22:25:11.157468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.740 [2024-10-12 22:25:11.165182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.740 [2024-10-12 22:25:11.166026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.740 [2024-10-12 22:25:11.166042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.740 [2024-10-12 22:25:11.173720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.740 [2024-10-12 22:25:11.174609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.740 [2024-10-12 22:25:11.174626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:36:52.740 [2024-10-12 22:25:11.182255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.740 [2024-10-12 22:25:11.183127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.740 [2024-10-12 22:25:11.183143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.740 [2024-10-12 22:25:11.190796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5220 00:36:52.740 [2024-10-12 22:25:11.191638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.740 [2024-10-12 22:25:11.191654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.740 [2024-10-12 22:25:11.199335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:52.740 [2024-10-12 22:25:11.200180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.740 [2024-10-12 22:25:11.200197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.740 [2024-10-12 22:25:11.207886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ebb98 00:36:52.740 [2024-10-12 22:25:11.208758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.740 [2024-10-12 22:25:11.208775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.740 [2024-10-12 22:25:11.216737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e4140 00:36:52.740 [2024-10-12 22:25:11.217816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:52.740 [2024-10-12 22:25:11.217835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:52.740 [2024-10-12 22:25:11.225329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f2d80 00:36:53.002 [2024-10-12 22:25:11.226376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.226392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.233901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5658 00:36:53.002 [2024-10-12 22:25:11.234923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.234939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.242385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7970 00:36:53.002 [2024-10-12 22:25:11.243441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.243456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.251071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e0630 00:36:53.002 [2024-10-12 22:25:11.252130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.252146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.259764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f9b30 00:36:53.002 [2024-10-12 22:25:11.260816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.260832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.268292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198eaef0 00:36:53.002 [2024-10-12 22:25:11.269312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.269328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.276822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7100 00:36:53.002 [2024-10-12 22:25:11.277870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.277886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.285419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198df550 00:36:53.002 [2024-10-12 22:25:11.286473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.286489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.293908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fd208 00:36:53.002 [2024-10-12 22:25:11.294951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.294970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.302461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0350 00:36:53.002 [2024-10-12 22:25:11.303506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.303522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.310978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e6300 00:36:53.002 [2024-10-12 22:25:11.312040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 
[2024-10-12 22:25:11.312057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.319503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e4578 00:36:53.002 [2024-10-12 22:25:11.320521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.320537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.328011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f92c0 00:36:53.002 [2024-10-12 22:25:11.329061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.329077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.336543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fb480 00:36:53.002 [2024-10-12 22:25:11.337620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.337636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.345090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f4298 00:36:53.002 [2024-10-12 22:25:11.346155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18446 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.346171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.353632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7970 00:36:53.002 [2024-10-12 22:25:11.354686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.354702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.362145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198dece0 00:36:53.002 [2024-10-12 22:25:11.363186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.002 [2024-10-12 22:25:11.363203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.002 [2024-10-12 22:25:11.370659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f1ca0 00:36:53.002 [2024-10-12 22:25:11.371661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.371678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.379158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:53.003 [2024-10-12 22:25:11.380214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:2878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.380230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.387678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5a90 00:36:53.003 [2024-10-12 22:25:11.388684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.388700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.396236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f6458 00:36:53.003 [2024-10-12 22:25:11.397274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.397291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.404767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f9b30 00:36:53.003 [2024-10-12 22:25:11.405814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.405830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.413296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198eaef0 00:36:53.003 [2024-10-12 22:25:11.414364] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.414381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.421803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7100 00:36:53.003 [2024-10-12 22:25:11.422855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.422872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.430308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198df550 00:36:53.003 [2024-10-12 22:25:11.431326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.431342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.438824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fd208 00:36:53.003 [2024-10-12 22:25:11.439882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.439898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.447359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0350 
00:36:53.003 [2024-10-12 22:25:11.448398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.448414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.455873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e6300 00:36:53.003 [2024-10-12 22:25:11.456929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.456946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.464388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e4578 00:36:53.003 [2024-10-12 22:25:11.465437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.465453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.472920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f92c0 00:36:53.003 [2024-10-12 22:25:11.473969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.473986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.003 [2024-10-12 22:25:11.481431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x881550) with pdu=0x2000198fb480 00:36:53.003 [2024-10-12 22:25:11.482458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.003 [2024-10-12 22:25:11.482475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.264 [2024-10-12 22:25:11.490044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f4298 00:36:53.264 [2024-10-12 22:25:11.491108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.264 [2024-10-12 22:25:11.491124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.264 [2024-10-12 22:25:11.498587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7970 00:36:53.264 [2024-10-12 22:25:11.499639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.264 [2024-10-12 22:25:11.499656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.264 [2024-10-12 22:25:11.507098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198dece0 00:36:53.264 [2024-10-12 22:25:11.508154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.264 [2024-10-12 22:25:11.508171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.264 [2024-10-12 22:25:11.515748] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f1ca0 00:36:53.264 [2024-10-12 22:25:11.516755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.264 [2024-10-12 22:25:11.516775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.264 [2024-10-12 22:25:11.524267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:53.265 [2024-10-12 22:25:11.525329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.525346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.532788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5a90 00:36:53.265 [2024-10-12 22:25:11.533846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.533863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.541316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f6458 00:36:53.265 [2024-10-12 22:25:11.542358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.542374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:36:53.265 [2024-10-12 22:25:11.549849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f9b30 00:36:53.265 [2024-10-12 22:25:11.550890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.550906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.558370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198eaef0 00:36:53.265 [2024-10-12 22:25:11.559427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.559444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.566858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7100 00:36:53.265 [2024-10-12 22:25:11.567920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.567936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.575411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198df550 00:36:53.265 [2024-10-12 22:25:11.576468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.576484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.583945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fd208 00:36:53.265 [2024-10-12 22:25:11.585004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.585020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.592473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0350 00:36:53.265 [2024-10-12 22:25:11.593543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.593559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.600989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e6300 00:36:53.265 [2024-10-12 22:25:11.602052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.602068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.609495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e4578 00:36:53.265 [2024-10-12 22:25:11.610507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.610523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.617997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f92c0 00:36:53.265 [2024-10-12 22:25:11.619043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.619060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.626529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fb480 00:36:53.265 [2024-10-12 22:25:11.627571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.627587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.635036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f4298 00:36:53.265 [2024-10-12 22:25:11.636041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.636057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.643562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7970 00:36:53.265 [2024-10-12 22:25:11.644607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.644623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.652073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198dece0 00:36:53.265 [2024-10-12 22:25:11.653133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.653150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.660573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f1ca0 00:36:53.265 [2024-10-12 22:25:11.661589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.661606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.669084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:53.265 [2024-10-12 22:25:11.670142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.670159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.677614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5a90 00:36:53.265 [2024-10-12 22:25:11.678664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 
[2024-10-12 22:25:11.678681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.686121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f6458 00:36:53.265 [2024-10-12 22:25:11.687176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.687193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.694637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f9b30 00:36:53.265 [2024-10-12 22:25:11.695695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.695711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.703147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198eaef0 00:36:53.265 [2024-10-12 22:25:11.704182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.704198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.711643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7100 00:36:53.265 [2024-10-12 22:25:11.712697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:336 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.712714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.720179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198df550 00:36:53.265 [2024-10-12 22:25:11.721231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.721248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.728708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fd208 00:36:53.265 [2024-10-12 22:25:11.729757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.729774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.737210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0350 00:36:53.265 [2024-10-12 22:25:11.738266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.738285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.265 [2024-10-12 22:25:11.745724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e6300 00:36:53.265 [2024-10-12 22:25:11.746752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:91 nsid:1 lba:6224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.265 [2024-10-12 22:25:11.746768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.754314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e4578 00:36:53.527 [2024-10-12 22:25:11.755375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.755392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.762837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f92c0 00:36:53.527 [2024-10-12 22:25:11.763841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.763858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.771383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fb480 00:36:53.527 [2024-10-12 22:25:11.772387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.772403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.779911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f4298 00:36:53.527 [2024-10-12 22:25:11.780975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.780991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.788428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7970 00:36:53.527 [2024-10-12 22:25:11.789491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.789507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.796950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198dece0 00:36:53.527 [2024-10-12 22:25:11.798002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.798018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.805446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f1ca0 00:36:53.527 [2024-10-12 22:25:11.806774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.806791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:36:53.527 29829.00 IOPS, 116.52 MiB/s [2024-10-12T20:25:12.016Z] [2024-10-12 22:25:11.813972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x881550) with pdu=0x2000198f0bc0 00:36:53.527 [2024-10-12 22:25:11.815027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.815044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.822503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:53.527 [2024-10-12 22:25:11.823574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.823591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.831024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f35f0 00:36:53.527 [2024-10-12 22:25:11.832092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.832112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.839545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7538 00:36:53.527 [2024-10-12 22:25:11.840605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.840621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.848024] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f8618 00:36:53.527 [2024-10-12 22:25:11.849038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.849054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.856555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198df118 00:36:53.527 [2024-10-12 22:25:11.857608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.857624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.527 [2024-10-12 22:25:11.865083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e0a68 00:36:53.527 [2024-10-12 22:25:11.866150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.527 [2024-10-12 22:25:11.866167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.873605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fd640 00:36:53.528 [2024-10-12 22:25:11.874665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.874681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:36:53.528 [2024-10-12 22:25:11.882117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0ff8 00:36:53.528 [2024-10-12 22:25:11.883187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.883203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.890619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198eff18 00:36:53.528 [2024-10-12 22:25:11.891642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.891658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.899129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e6fa8 00:36:53.528 [2024-10-12 22:25:11.900179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.900195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.907671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5ec8 00:36:53.528 [2024-10-12 22:25:11.908740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.908756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.916198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ef6a8 00:36:53.528 [2024-10-12 22:25:11.917250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.917267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.924705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f6890 00:36:53.528 [2024-10-12 22:25:11.925763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.925780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.933238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ea248 00:36:53.528 [2024-10-12 22:25:11.934263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.934279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.941728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f96f8 00:36:53.528 [2024-10-12 22:25:11.942792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.942808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.950252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fa7d8 00:36:53.528 [2024-10-12 22:25:11.951307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.951323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.958801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f46d0 00:36:53.528 [2024-10-12 22:25:11.959854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.959874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.967350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e6300 00:36:53.528 [2024-10-12 22:25:11.968404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.968420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.975884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e23b8 00:36:53.528 [2024-10-12 22:25:11.976939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.976956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.984410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e4578 00:36:53.528 [2024-10-12 22:25:11.985470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.985486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:11.992914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e9e10 00:36:53.528 [2024-10-12 22:25:11.993983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:11.993999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:12.001532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f92c0 00:36:53.528 [2024-10-12 22:25:12.002590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 [2024-10-12 22:25:12.002607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.528 [2024-10-12 22:25:12.010062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fa3a0 00:36:53.528 [2024-10-12 22:25:12.011120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.528 
[2024-10-12 22:25:12.011136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.789 [2024-10-12 22:25:12.018597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fb480 00:36:53.789 [2024-10-12 22:25:12.019657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.789 [2024-10-12 22:25:12.019674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.789 [2024-10-12 22:25:12.027140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ea680 00:36:53.789 [2024-10-12 22:25:12.028170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.789 [2024-10-12 22:25:12.028186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.789 [2024-10-12 22:25:12.035636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f4298 00:36:53.790 [2024-10-12 22:25:12.036714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.036730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.044169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f31b8 00:36:53.790 [2024-10-12 22:25:12.045221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13371 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.045238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.052699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7970 00:36:53.790 [2024-10-12 22:25:12.053761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.053777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.061203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198dfdc0 00:36:53.790 [2024-10-12 22:25:12.062249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.062266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.069725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198dece0 00:36:53.790 [2024-10-12 22:25:12.070774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.070790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.078239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e0ea0 00:36:53.790 [2024-10-12 22:25:12.079305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:101 nsid:1 lba:12216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.079321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.086900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f1ca0 00:36:53.790 [2024-10-12 22:25:12.087936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.087953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.095464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0bc0 00:36:53.790 [2024-10-12 22:25:12.096531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.096547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.103992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:53.790 [2024-10-12 22:25:12.105066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.105083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.112515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f35f0 00:36:53.790 [2024-10-12 22:25:12.113585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.113601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.121019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7538 00:36:53.790 [2024-10-12 22:25:12.122077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.122093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.129522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f8618 00:36:53.790 [2024-10-12 22:25:12.130575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.130592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.138050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198df118 00:36:53.790 [2024-10-12 22:25:12.139114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.139130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.146582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e0a68 00:36:53.790 
[2024-10-12 22:25:12.147648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.147664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.155086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fd640 00:36:53.790 [2024-10-12 22:25:12.156150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.156166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.163598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0ff8 00:36:53.790 [2024-10-12 22:25:12.164654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.164671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.172122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198eff18 00:36:53.790 [2024-10-12 22:25:12.173174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.173190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.180623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x881550) with pdu=0x2000198e6fa8 00:36:53.790 [2024-10-12 22:25:12.181684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.181703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.189140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5ec8 00:36:53.790 [2024-10-12 22:25:12.190186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.190203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.197649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ef6a8 00:36:53.790 [2024-10-12 22:25:12.198705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.198722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.206159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f6890 00:36:53.790 [2024-10-12 22:25:12.207225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.207242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.214669] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ea248 00:36:53.790 [2024-10-12 22:25:12.215717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.215733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.223195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f96f8 00:36:53.790 [2024-10-12 22:25:12.224262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.224279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.231710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fa7d8 00:36:53.790 [2024-10-12 22:25:12.232774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.232791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.240232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f46d0 00:36:53.790 [2024-10-12 22:25:12.241295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.241311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:36:53.790 [2024-10-12 22:25:12.248745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e6300 00:36:53.790 [2024-10-12 22:25:12.249797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.249814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.257278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e23b8 00:36:53.790 [2024-10-12 22:25:12.258348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.258367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.265789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e4578 00:36:53.790 [2024-10-12 22:25:12.266848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.790 [2024-10-12 22:25:12.266865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:53.790 [2024-10-12 22:25:12.274315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e9e10 00:36:53.790 [2024-10-12 22:25:12.275377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:53.791 [2024-10-12 22:25:12.275394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.282868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f92c0 00:36:54.052 [2024-10-12 22:25:12.283923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.283939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.291465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fa3a0 00:36:54.052 [2024-10-12 22:25:12.292526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.292542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.299990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fb480 00:36:54.052 [2024-10-12 22:25:12.301040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.301057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.308503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ea680 00:36:54.052 [2024-10-12 22:25:12.309559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.309576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.317011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f4298 00:36:54.052 [2024-10-12 22:25:12.318076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.318092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.325554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f31b8 00:36:54.052 [2024-10-12 22:25:12.326608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.326625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.334073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7970 00:36:54.052 [2024-10-12 22:25:12.335146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.335163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.342611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198dfdc0 00:36:54.052 [2024-10-12 22:25:12.343678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.343695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.351136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198dece0 00:36:54.052 [2024-10-12 22:25:12.352202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.352219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.359635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e0ea0 00:36:54.052 [2024-10-12 22:25:12.360701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.360718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.368295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f1ca0 00:36:54.052 [2024-10-12 22:25:12.369349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.369366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.376844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0bc0 00:36:54.052 [2024-10-12 22:25:12.377881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:54.052 [2024-10-12 22:25:12.377898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.385391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:54.052 [2024-10-12 22:25:12.386457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.386474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.393914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f35f0 00:36:54.052 [2024-10-12 22:25:12.394981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.394998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.402430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7538 00:36:54.052 [2024-10-12 22:25:12.403499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.403516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.410943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f8618 00:36:54.052 [2024-10-12 22:25:12.411994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10938 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.412011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.419501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198df118 00:36:54.052 [2024-10-12 22:25:12.420559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.420576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.428039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e0a68 00:36:54.052 [2024-10-12 22:25:12.429088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.429108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.436569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fd640 00:36:54.052 [2024-10-12 22:25:12.437638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.437655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.445093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0ff8 00:36:54.052 [2024-10-12 22:25:12.446109] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.446126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.453598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198eff18 00:36:54.052 [2024-10-12 22:25:12.454651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.454668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.462123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e6fa8 00:36:54.052 [2024-10-12 22:25:12.463178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.052 [2024-10-12 22:25:12.463195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.052 [2024-10-12 22:25:12.470666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5ec8 00:36:54.052 [2024-10-12 22:25:12.471719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.053 [2024-10-12 22:25:12.471735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.053 [2024-10-12 22:25:12.479202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ef6a8 00:36:54.053 [2024-10-12 22:25:12.480255] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.053 [2024-10-12 22:25:12.480274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.053 [2024-10-12 22:25:12.487738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f6890 00:36:54.053 [2024-10-12 22:25:12.488792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.053 [2024-10-12 22:25:12.488809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.053 [2024-10-12 22:25:12.496273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ea248 00:36:54.053 [2024-10-12 22:25:12.497340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.053 [2024-10-12 22:25:12.497357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.053 [2024-10-12 22:25:12.504790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f96f8 00:36:54.053 [2024-10-12 22:25:12.505840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.053 [2024-10-12 22:25:12.505857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.053 [2024-10-12 22:25:12.513359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fa7d8 
00:36:54.053 [2024-10-12 22:25:12.514377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.053 [2024-10-12 22:25:12.514393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.053 [2024-10-12 22:25:12.521897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f46d0 00:36:54.053 [2024-10-12 22:25:12.522963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.053 [2024-10-12 22:25:12.522980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.053 [2024-10-12 22:25:12.530422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e6300 00:36:54.053 [2024-10-12 22:25:12.531494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.053 [2024-10-12 22:25:12.531510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.314 [2024-10-12 22:25:12.538949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e23b8 00:36:54.314 [2024-10-12 22:25:12.540016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.314 [2024-10-12 22:25:12.540033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.314 [2024-10-12 22:25:12.547467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x881550) with pdu=0x2000198e4578 00:36:54.314 [2024-10-12 22:25:12.548541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.314 [2024-10-12 22:25:12.548558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.314 [2024-10-12 22:25:12.556015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e9e10 00:36:54.314 [2024-10-12 22:25:12.557067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.314 [2024-10-12 22:25:12.557084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.314 [2024-10-12 22:25:12.564569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f92c0 00:36:54.314 [2024-10-12 22:25:12.565636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.314 [2024-10-12 22:25:12.565653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.314 [2024-10-12 22:25:12.573088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fa3a0 00:36:54.314 [2024-10-12 22:25:12.574153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.314 [2024-10-12 22:25:12.574170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.314 [2024-10-12 22:25:12.581609] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fb480 00:36:54.314 [2024-10-12 22:25:12.582661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.582678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.590149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ea680 00:36:54.315 [2024-10-12 22:25:12.591185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.591202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.598661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f4298 00:36:54.315 [2024-10-12 22:25:12.599717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.599734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.607204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f31b8 00:36:54.315 [2024-10-12 22:25:12.608213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.608230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:36:54.315 [2024-10-12 22:25:12.615740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7970 00:36:54.315 [2024-10-12 22:25:12.616775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.616792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.624258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198dfdc0 00:36:54.315 [2024-10-12 22:25:12.625323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.625341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.632788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198dece0 00:36:54.315 [2024-10-12 22:25:12.633853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.633870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.641350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e0ea0 00:36:54.315 [2024-10-12 22:25:12.642419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.642437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.649897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f1ca0 00:36:54.315 [2024-10-12 22:25:12.650958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.650974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.658443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0bc0 00:36:54.315 [2024-10-12 22:25:12.659508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.659524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.666974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e7c50 00:36:54.315 [2024-10-12 22:25:12.668037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.668054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.675501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f35f0 00:36:54.315 [2024-10-12 22:25:12.676544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.676561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.684013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f7538 00:36:54.315 [2024-10-12 22:25:12.685088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.685109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.692542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f8618 00:36:54.315 [2024-10-12 22:25:12.693600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.693617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.701098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198df118 00:36:54.315 [2024-10-12 22:25:12.702154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.702172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.709618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e0a68 00:36:54.315 [2024-10-12 22:25:12.710667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.710683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.718141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fd640 00:36:54.315 [2024-10-12 22:25:12.719188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.719204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.726670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f0ff8 00:36:54.315 [2024-10-12 22:25:12.727737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.727754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.735179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198eff18 00:36:54.315 [2024-10-12 22:25:12.736242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.736259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.743710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e6fa8 00:36:54.315 [2024-10-12 22:25:12.744779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 
[2024-10-12 22:25:12.744796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.752237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198e5ec8 00:36:54.315 [2024-10-12 22:25:12.753281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.753298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.760768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ef6a8 00:36:54.315 [2024-10-12 22:25:12.761910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.761927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.769443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f6890 00:36:54.315 [2024-10-12 22:25:12.770524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.770541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.777974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198ea248 00:36:54.315 [2024-10-12 22:25:12.779046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5956 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.779063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.786513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f96f8 00:36:54.315 [2024-10-12 22:25:12.787579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.787596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.315 [2024-10-12 22:25:12.795101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198fa7d8 00:36:54.315 [2024-10-12 22:25:12.796155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.315 [2024-10-12 22:25:12.796172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.576 [2024-10-12 22:25:12.803629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881550) with pdu=0x2000198f46d0 00:36:54.577 [2024-10-12 22:25:12.804681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:54.577 [2024-10-12 22:25:12.804698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:54.577 29895.00 IOPS, 116.78 MiB/s 00:36:54.577 Latency(us) 00:36:54.577 [2024-10-12T20:25:13.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.577 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:36:54.577 nvme0n1 : 2.00 29904.50 116.81 0.00 0.00 4275.40 1829.55 9284.27 00:36:54.577 [2024-10-12T20:25:13.066Z] =================================================================================================================== 00:36:54.577 [2024-10-12T20:25:13.066Z] Total : 29904.50 116.81 0.00 0.00 4275.40 1829.55 9284.27 00:36:54.577 { 00:36:54.577 "results": [ 00:36:54.577 { 00:36:54.577 "job": "nvme0n1", 00:36:54.577 "core_mask": "0x2", 00:36:54.577 "workload": "randwrite", 00:36:54.577 "status": "finished", 00:36:54.577 "queue_depth": 128, 00:36:54.577 "io_size": 4096, 00:36:54.577 "runtime": 2.003645, 00:36:54.577 "iops": 29904.4990504805, 00:36:54.577 "mibps": 116.81444941593945, 00:36:54.577 "io_failed": 0, 00:36:54.577 "io_timeout": 0, 00:36:54.577 "avg_latency_us": 4275.395225029763, 00:36:54.577 "min_latency_us": 1829.5466666666666, 00:36:54.577 "max_latency_us": 9284.266666666666 00:36:54.577 } 00:36:54.577 ], 00:36:54.577 "core_count": 1 00:36:54.577 } 00:36:54.577 22:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:54.577 22:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:54.577 22:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:54.577 | .driver_specific 00:36:54.577 | .nvme_error 00:36:54.577 | .status_code 00:36:54.577 | .command_transient_transport_error' 00:36:54.577 22:25:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:54.577 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 234 > 0 )) 00:36:54.577 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3764737 00:36:54.577 22:25:13 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3764737 ']' 00:36:54.577 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3764737 00:36:54.577 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:36:54.577 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:54.577 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3764737 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3764737' 00:36:54.838 killing process with pid 3764737 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3764737 00:36:54.838 Received shutdown signal, test time was about 2.000000 seconds 00:36:54.838 00:36:54.838 Latency(us) 00:36:54.838 [2024-10-12T20:25:13.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.838 [2024-10-12T20:25:13.327Z] =================================================================================================================== 00:36:54.838 [2024-10-12T20:25:13.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3764737 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3765522 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3765522 /var/tmp/bperf.sock 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3765522 ']' 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:54.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:54.838 22:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:54.838 [2024-10-12 22:25:13.239741] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:36:54.838 [2024-10-12 22:25:13.239797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765522 ] 00:36:54.838 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:54.838 Zero copy mechanism will not be used. 00:36:54.838 [2024-10-12 22:25:13.316794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.099 [2024-10-12 22:25:13.343533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:36:55.670 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:55.670 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:36:55.670 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:55.670 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:55.930 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:55.930 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.930 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:55.930 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.930 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:55.930 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:56.191 nvme0n1 00:36:56.191 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:56.191 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.191 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:56.191 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.191 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:56.191 22:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:56.452 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:56.452 Zero copy mechanism will not be used. 00:36:56.452 Running I/O for 2 seconds... 
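The xtrace above shows the digest-error test arming CRC32C corruption and attaching an NVMe/TCP controller with data digest enabled before driving I/O. The same sequence can be sketched as a standalone script; this is a minimal sketch assuming an already-running bdevperf instance listening on `/var/tmp/bperf.sock` and an NVMe-oF target at 10.0.0.2:4420 exposing `nqn.2016-06.io.spdk:cnode1` (the `SPDK_DIR` variable is a hypothetical placeholder for an SPDK checkout, not a path from this log):

```shell
#!/usr/bin/env bash
# Sketch of the digest-error test flow traced in the log above.
# Assumes bdevperf was started separately, e.g.:
#   $SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
#       -w randwrite -o 131072 -t 2 -q 16 -z
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical; adjust to your checkout
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

# Count NVMe errors per controller and retry transport errors indefinitely,
# so injected digest failures surface as transient-error statistics.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Arm the accel error injector: corrupt 32 CRC32C operations, so the
# target computes bad data digests and the initiator sees
# "COMMAND TRANSIENT TRANSPORT ERROR" completions like those logged above.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Attach the NVMe/TCP controller with data digest (--ddgst) enabled.
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Kick off the timed I/O run; errors accumulate in the iostat counters.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

# Afterwards the harness reads the transient-error count, as in
# get_transient_errcount earlier in the log:
$RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
    | .driver_specific.nvme_error.status_code.command_transient_transport_error'
```

The key design point visible in the log is `--bdev-retry-count -1`: each corrupted digest is retried rather than failed up the stack, so the workload completes while the injected errors remain countable via `bdev_get_iostat`.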
00:36:56.452 [2024-10-12 22:25:14.756273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.452 [2024-10-12 22:25:14.756642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.452 [2024-10-12 22:25:14.756671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.452 [2024-10-12 22:25:14.767308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.452 [2024-10-12 22:25:14.767629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.452 [2024-10-12 22:25:14.767651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.452 [2024-10-12 22:25:14.777024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.452 [2024-10-12 22:25:14.777416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.452 [2024-10-12 22:25:14.777435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.452 [2024-10-12 22:25:14.786553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.452 [2024-10-12 22:25:14.786830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.452 [2024-10-12 22:25:14.786849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.452 [2024-10-12 22:25:14.794748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.452 [2024-10-12 22:25:14.795050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.452 [2024-10-12 22:25:14.795068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.452 [2024-10-12 22:25:14.804979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.805174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.805191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.815613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.815948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.815967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.826863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.827066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.827084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.837853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.838077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.838094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.848559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.848849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.848867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.859502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.859800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.859817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.869145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.869455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:56.453 [2024-10-12 22:25:14.869472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.878638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.878969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.878989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.888539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.888831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.888849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.898009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.898297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.898315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.906150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.906440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.906459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.915396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.915587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.915604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.922655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.922856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.922873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.928029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.928235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.928252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.453 [2024-10-12 22:25:14.934661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.453 [2024-10-12 22:25:14.934898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.453 [2024-10-12 22:25:14.934914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.715 [2024-10-12 22:25:14.942073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.715 [2024-10-12 22:25:14.942482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.715 [2024-10-12 22:25:14.942500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.715 [2024-10-12 22:25:14.951604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.715 [2024-10-12 22:25:14.951930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.715 [2024-10-12 22:25:14.951947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.715 [2024-10-12 22:25:14.961218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.715 [2024-10-12 22:25:14.961519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.715 [2024-10-12 22:25:14.961536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.715 [2024-10-12 22:25:14.970594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 
00:36:56.715 [2024-10-12 22:25:14.970995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.715 [2024-10-12 22:25:14.971013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.715 [2024-10-12 22:25:14.977916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:14.978247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:14.978265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:14.987325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:14.987609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:14.987626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:14.997535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:14.997783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:14.997801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.005188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.005376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.005392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.012174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.012495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.012513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.018242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.018433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.018450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.023931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.024126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.024143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.028787] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.029090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.029112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.034656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.035022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.035039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.040268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.040563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.040580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.048892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.049230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.049248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.059226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.059545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.059563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.068507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.068750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.068767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.079455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.079679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.079696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.090668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.091003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.091024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.101665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.102022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.102040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.113468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.113670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.113685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.125407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.125668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.125684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.137303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.137618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.137636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.147016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.147472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.147490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.157871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.158142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.158159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.169535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.169847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.169865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.179302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.179533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:56.716 [2024-10-12 22:25:15.179549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.187756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.188092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.188114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.716 [2024-10-12 22:25:15.196877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.716 [2024-10-12 22:25:15.197065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.716 [2024-10-12 22:25:15.197082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.979 [2024-10-12 22:25:15.205002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.979 [2024-10-12 22:25:15.205206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-10-12 22:25:15.205224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.979 [2024-10-12 22:25:15.213185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.979 [2024-10-12 22:25:15.213597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-10-12 22:25:15.213615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.979 [2024-10-12 22:25:15.221604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.979 [2024-10-12 22:25:15.221793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-10-12 22:25:15.221809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.979 [2024-10-12 22:25:15.229387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.979 [2024-10-12 22:25:15.229575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-10-12 22:25:15.229592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.979 [2024-10-12 22:25:15.238880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.979 [2024-10-12 22:25:15.239232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-10-12 22:25:15.239250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.979 [2024-10-12 22:25:15.246632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.979 [2024-10-12 22:25:15.246936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-10-12 22:25:15.246954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.979 [2024-10-12 22:25:15.254990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.979 [2024-10-12 22:25:15.255322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-10-12 22:25:15.255340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.979 [2024-10-12 22:25:15.264521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.979 [2024-10-12 22:25:15.264831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.979 [2024-10-12 22:25:15.264847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.979 [2024-10-12 22:25:15.270927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.979 [2024-10-12 22:25:15.271175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.271192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.276248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 
00:36:56.980 [2024-10-12 22:25:15.276555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.276573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.283126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.283318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.283335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.289439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.289630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.289646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.298442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.298733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.298751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.305996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.306282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.306300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.315338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.315529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.315546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.325842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.326032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.326050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.337726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.337984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.338001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 
22:25:15.349272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.349480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.349496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.359716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.360006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.360024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.371058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.371273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.371290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.382534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.382760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.382777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.394599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.394822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.394839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.405709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.405927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.405944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.417159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.417404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.417422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.427982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.428211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.428228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.439273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.439526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.439544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.450843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.451126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.451143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:56.980 [2024-10-12 22:25:15.461639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:56.980 [2024-10-12 22:25:15.461891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:56.980 [2024-10-12 22:25:15.461908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.242 [2024-10-12 22:25:15.473265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.242 [2024-10-12 22:25:15.473510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.242 [2024-10-12 22:25:15.473526] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.242 [2024-10-12 22:25:15.483274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.242 [2024-10-12 22:25:15.483467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.242 [2024-10-12 22:25:15.483484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.242 [2024-10-12 22:25:15.494955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.242 [2024-10-12 22:25:15.495188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.242 [2024-10-12 22:25:15.495204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.242 [2024-10-12 22:25:15.505949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.506244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.506262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.514027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.514338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.514359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.523854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.524055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.524072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.530877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.531134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.531151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.539901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.540093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.540115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.550334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.550534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.550550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.561708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.561907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.561924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.573069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.573357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.573374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.584465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.584698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.584724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.596217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.596460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.596478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.608152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.608424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.608441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.619669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.619889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.619906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.631677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.631918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.631935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.643666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 
00:36:57.243 [2024-10-12 22:25:15.644057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.644074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.654646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.654982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.654999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.666038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.666302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.666318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.678173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.678386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.678402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.689976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.690170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.690186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.701806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.702133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.702150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.714244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.714506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.714524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.243 [2024-10-12 22:25:15.726078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.243 [2024-10-12 22:25:15.726326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.243 [2024-10-12 22:25:15.726345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.737290] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.737505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.737521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.506 3176.00 IOPS, 397.00 MiB/s [2024-10-12T20:25:15.995Z] [2024-10-12 22:25:15.749107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.749408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.749426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.761165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.761385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.761402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.772203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.772456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.772473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.783670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.783888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.783905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.794794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.795015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.795030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.805560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.805907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.805927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.816801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.817030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.817046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.826924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.827205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.827222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.838654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.838947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.838964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.847818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.848087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.848110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.857217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.857407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:57.506 [2024-10-12 22:25:15.857423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.863436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.863625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.863641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.872878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.873164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.873184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.882324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.882615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.882633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.891978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.892306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.506 [2024-10-12 22:25:15.892324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.506 [2024-10-12 22:25:15.901961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.506 [2024-10-12 22:25:15.902286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.902304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.909973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.507 [2024-10-12 22:25:15.910180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.910196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.918315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.507 [2024-10-12 22:25:15.918610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.918627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.929221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.507 [2024-10-12 22:25:15.929556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.929573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.936977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.507 [2024-10-12 22:25:15.937276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.937294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.946445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.507 [2024-10-12 22:25:15.946635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.946652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.954268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.507 [2024-10-12 22:25:15.954459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.954476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.962059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 
00:36:57.507 [2024-10-12 22:25:15.962261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.962278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.970735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.507 [2024-10-12 22:25:15.970925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.970942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.978823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.507 [2024-10-12 22:25:15.979022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.979038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.984643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.507 [2024-10-12 22:25:15.984979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.984996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.507 [2024-10-12 22:25:15.990998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.507 [2024-10-12 22:25:15.991195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.507 [2024-10-12 22:25:15.991211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.769 [2024-10-12 22:25:15.997273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.769 [2024-10-12 22:25:15.997711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.769 [2024-10-12 22:25:15.997729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.769 [2024-10-12 22:25:16.004748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.004937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.004954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.013670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.013909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.013927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.022331] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.022636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.022654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.031285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.031595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.031615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.039774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.040113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.040130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.046975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.047171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.047189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.052805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.053009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.053025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.060845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.061183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.061200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.068936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.069253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.069270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.076066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.076379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.076397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.083637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.083963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.083981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.093298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.093580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.093597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.103130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.103417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.103436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.111789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.112100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.112123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.120737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.121034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.121051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.129212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.129577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.129594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.136113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.136305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.136321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.145346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.145665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:57.770 [2024-10-12 22:25:16.145682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.150412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.150590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.150606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.154634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.154825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.154841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.162606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.162928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.162945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.171028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.171318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.171335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.178216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.178572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.178590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.186835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.187120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.187138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.193713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.194019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.194036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.201593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.201800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.201817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.211521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.211797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.211815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.218910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.219223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.770 [2024-10-12 22:25:16.219240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:57.770 [2024-10-12 22:25:16.226907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.770 [2024-10-12 22:25:16.226971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.771 [2024-10-12 22:25:16.226986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:57.771 [2024-10-12 22:25:16.234756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 
00:36:57.771 [2024-10-12 22:25:16.235040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.771 [2024-10-12 22:25:16.235059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:57.771 [2024-10-12 22:25:16.242473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.771 [2024-10-12 22:25:16.242529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.771 [2024-10-12 22:25:16.242543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:57.771 [2024-10-12 22:25:16.248128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:57.771 [2024-10-12 22:25:16.248173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:57.771 [2024-10-12 22:25:16.248188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.258784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.258876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.258891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.268570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.268848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.268864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.277171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.277236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.277251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.286641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.286703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.286718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.295659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.295891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.295906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.303363] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.303410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.303425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.309740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.309793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.309809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.317195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.317258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.317273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.324424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.324740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.324757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.332890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.332953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.332968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.339935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.339980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.339996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.344912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.344973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.344988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.349305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.349360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.349375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.357237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.357290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.357305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.363269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.363314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.363329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.368216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.368397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.368413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.375505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.375743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.375760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.380013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.380067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.380082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.387453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.387520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.387536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.393661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.393733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.393748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.398757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.398798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:58.034 [2024-10-12 22:25:16.398813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.403349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.403397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.403412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.410522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.410569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.410585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.418362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.418426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.418444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.424340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.424389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.424405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.429059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.429135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.034 [2024-10-12 22:25:16.429150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:58.034 [2024-10-12 22:25:16.433358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.034 [2024-10-12 22:25:16.433415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.035 [2024-10-12 22:25:16.433430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:58.035 [2024-10-12 22:25:16.437018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.035 [2024-10-12 22:25:16.437062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.035 [2024-10-12 22:25:16.437077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:58.035 [2024-10-12 22:25:16.441067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.035 [2024-10-12 22:25:16.441133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.035 [2024-10-12 22:25:16.441148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:58.035 [2024-10-12 22:25:16.444791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.035 [2024-10-12 22:25:16.444835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.035 [2024-10-12 22:25:16.444851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:58.035 [2024-10-12 22:25:16.448896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.035 [2024-10-12 22:25:16.448944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.035 [2024-10-12 22:25:16.448959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:58.035 [2024-10-12 22:25:16.452505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 00:36:58.035 [2024-10-12 22:25:16.452549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:58.035 [2024-10-12 22:25:16.452565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:58.035 [2024-10-12 22:25:16.456455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90 
00:36:58.035 [2024-10-12 22:25:16.456503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.456518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.035 [2024-10-12 22:25:16.460175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.035 [2024-10-12 22:25:16.460223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.460238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.035 [2024-10-12 22:25:16.463880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.035 [2024-10-12 22:25:16.463928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.463943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.035 [2024-10-12 22:25:16.467863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.035 [2024-10-12 22:25:16.467907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.467923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.035 [2024-10-12 22:25:16.471183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.035 [2024-10-12 22:25:16.471237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.471252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.035 [2024-10-12 22:25:16.474351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.035 [2024-10-12 22:25:16.474408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.474423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.035 [2024-10-12 22:25:16.479297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.035 [2024-10-12 22:25:16.479582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.479598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.035 [2024-10-12 22:25:16.488323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.035 [2024-10-12 22:25:16.488377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.488392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.035 [2024-10-12 22:25:16.497896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.035 [2024-10-12 22:25:16.497953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.497971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.035 [2024-10-12 22:25:16.507876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.035 [2024-10-12 22:25:16.508151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.508166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.035 [2024-10-12 22:25:16.517941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.035 [2024-10-12 22:25:16.518237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.035 [2024-10-12 22:25:16.518253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.529419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.529498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.529513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.539432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.539747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.539762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.550248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.550574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.550590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.560097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.560356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.560372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.566941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.566997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.567013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.570529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.570579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.570594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.574240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.574300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.574315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.578501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.578547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.578563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.582707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.582755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.582770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.586584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.586640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.586656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.590456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.590500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.590515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.594875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.594935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.594950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.603093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.603157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.603173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.611026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.611072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.611087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.614696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.614741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.614756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.618722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.618770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.618785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.624415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.624500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.624514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.630312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.630373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.630389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.637417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.637479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.637494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.647160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.647469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.647485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.651890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.651934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.651949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.656079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.656135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.656150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.659907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.659960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.659975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.665532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.665618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.665636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.669872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.669930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.669945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.673856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.673920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.673935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.298 [2024-10-12 22:25:16.677726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.298 [2024-10-12 22:25:16.677777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.298 [2024-10-12 22:25:16.677792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.681243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.681288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.681303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.684571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.684615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.684629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.688661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.688711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.688726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.692392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.692438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.692453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.695880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.695930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.695945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.699691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.699740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.699755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.703454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.703498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.703513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.708272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.708340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.708355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.712220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.712265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.712280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.715544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.715597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.715613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.719476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.719535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.719550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.726379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.726426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.726441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.731416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.731462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.731477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.735280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.735325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.735340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.738652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.738698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.738713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:58.299 [2024-10-12 22:25:16.742998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.743064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.743080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:58.299 3850.00 IOPS, 481.25 MiB/s [2024-10-12T20:25:16.788Z] [2024-10-12 22:25:16.749428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x881890) with pdu=0x2000198fef90
00:36:58.299 [2024-10-12 22:25:16.749474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0
00:36:58.299 [2024-10-12 22:25:16.749489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:58.299
00:36:58.299 Latency(us)
00:36:58.299 [2024-10-12T20:25:16.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:58.299 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:58.299 nvme0n1 : 2.01 3849.52 481.19 0.00 0.00 4149.49 1460.91 12834.13
00:36:58.299 [2024-10-12T20:25:16.788Z] ===================================================================================================================
00:36:58.299 [2024-10-12T20:25:16.788Z] Total : 3849.52 481.19 0.00 0.00 4149.49 1460.91 12834.13
00:36:58.299 {
00:36:58.299 "results": [
00:36:58.299 {
00:36:58.299 "job": "nvme0n1",
00:36:58.299 "core_mask": "0x2",
00:36:58.299 "workload": "randwrite",
00:36:58.299 "status": "finished",
00:36:58.299 "queue_depth": 16,
00:36:58.299 "io_size": 131072,
00:36:58.299 "runtime": 2.005444,
00:36:58.299 "iops": 3849.521602198815,
00:36:58.299 "mibps": 481.19020027485186,
00:36:58.299 "io_failed": 0,
00:36:58.299 "io_timeout": 0,
00:36:58.299 "avg_latency_us": 4149.493388601036,
00:36:58.299 "min_latency_us": 1460.9066666666668,
00:36:58.299 "max_latency_us": 12834.133333333333
00:36:58.299 }
00:36:58.299 ],
00:36:58.299 "core_count": 1
00:36:58.299 }
00:36:58.299 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:58.299 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:58.299 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:58.299 | .driver_specific
00:36:58.299 | .nvme_error
00:36:58.299 | .status_code
00:36:58.299 | .command_transient_transport_error'
00:36:58.299 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:58.560 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 249 > 0 ))
00:36:58.560 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3765522
00:36:58.560 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3765522 ']'
00:36:58.560 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3765522
00:36:58.560 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:36:58.560 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:58.560 22:25:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3765522
00:36:58.560 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:36:58.560 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:36:58.560 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3765522'
00:36:58.560 killing process with pid 3765522
00:36:58.560 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3765522
00:36:58.560 Received shutdown signal, test time was about 2.000000 seconds
00:36:58.560
00:36:58.560 Latency(us)
00:36:58.560 [2024-10-12T20:25:17.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:58.560 [2024-10-12T20:25:17.049Z] ===================================================================================================================
00:36:58.560 [2024-10-12T20:25:17.049Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:58.560 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3765522
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3763121
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3763121 ']'
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3763121
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3763121
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3763121'
00:36:58.822 killing process with pid 3763121
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3763121
00:36:58.822 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3763121
00:36:59.082
00:36:59.082 real 0m16.098s
00:36:59.082 user 0m31.909s
00:36:59.082 sys 0m3.491s
00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- #
xtrace_disable 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:59.083 ************************************ 00:36:59.083 END TEST nvmf_digest_error 00:36:59.083 ************************************ 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:59.083 rmmod nvme_tcp 00:36:59.083 rmmod nvme_fabrics 00:36:59.083 rmmod nvme_keyring 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 3763121 ']' 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 3763121 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3763121 ']' 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3763121 00:36:59.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3763121) - No such process 00:36:59.083 22:25:17 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3763121 is not found' 00:36:59.083 Process with pid 3763121 is not found 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:59.083 22:25:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:01.627 00:37:01.627 real 0m41.199s 00:37:01.627 user 1m3.964s 00:37:01.627 sys 0m13.023s 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:01.627 ************************************ 00:37:01.627 END TEST nvmf_digest 00:37:01.627 ************************************ 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.627 ************************************ 00:37:01.627 START TEST nvmf_bdevperf 00:37:01.627 ************************************ 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:01.627 * Looking for test storage... 
00:37:01.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.627 --rc genhtml_branch_coverage=1 00:37:01.627 --rc genhtml_function_coverage=1 00:37:01.627 --rc genhtml_legend=1 00:37:01.627 --rc geninfo_all_blocks=1 00:37:01.627 --rc geninfo_unexecuted_blocks=1 00:37:01.627 00:37:01.627 ' 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:37:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.627 --rc genhtml_branch_coverage=1 00:37:01.627 --rc genhtml_function_coverage=1 00:37:01.627 --rc genhtml_legend=1 00:37:01.627 --rc geninfo_all_blocks=1 00:37:01.627 --rc geninfo_unexecuted_blocks=1 00:37:01.627 00:37:01.627 ' 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.627 --rc genhtml_branch_coverage=1 00:37:01.627 --rc genhtml_function_coverage=1 00:37:01.627 --rc genhtml_legend=1 00:37:01.627 --rc geninfo_all_blocks=1 00:37:01.627 --rc geninfo_unexecuted_blocks=1 00:37:01.627 00:37:01.627 ' 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:01.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.627 --rc genhtml_branch_coverage=1 00:37:01.627 --rc genhtml_function_coverage=1 00:37:01.627 --rc genhtml_legend=1 00:37:01.627 --rc geninfo_all_blocks=1 00:37:01.627 --rc geninfo_unexecuted_blocks=1 00:37:01.627 00:37:01.627 ' 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:01.627 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:01.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:01.628 22:25:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:09.770 22:25:26 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:09.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:09.770 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:09.770 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:09.770 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:09.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # is_hw=yes 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:09.771 22:25:26 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:09.771 22:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:09.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:09.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:37:09.771 00:37:09.771 --- 10.0.0.2 ping statistics --- 00:37:09.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.771 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:09.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:09.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:37:09.771 00:37:09.771 --- 10.0.0.1 ping statistics --- 00:37:09.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:09.771 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # return 0 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:09.771 22:25:27 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3770368 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3770368 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3770368 ']' 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:09.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:09.771 22:25:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:09.771 [2024-10-12 22:25:27.219162] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:37:09.771 [2024-10-12 22:25:27.219232] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:09.771 [2024-10-12 22:25:27.313651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:09.771 [2024-10-12 22:25:27.361349] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:09.771 [2024-10-12 22:25:27.361410] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:09.771 [2024-10-12 22:25:27.361418] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:09.771 [2024-10-12 22:25:27.361425] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:09.771 [2024-10-12 22:25:27.361432] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:09.771 [2024-10-12 22:25:27.361590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:37:09.771 [2024-10-12 22:25:27.361738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:37:09.771 [2024-10-12 22:25:27.361738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:09.771 [2024-10-12 22:25:28.096725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:09.771 Malloc0
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:09.771 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:09.771 [2024-10-12 22:25:28.166552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=()
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:37:09.772 {
00:37:09.772 "params": {
00:37:09.772 "name": "Nvme$subsystem",
00:37:09.772 "trtype": "$TEST_TRANSPORT",
00:37:09.772 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:09.772 "adrfam": "ipv4",
00:37:09.772 "trsvcid": "$NVMF_PORT",
00:37:09.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:09.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:09.772 "hdgst": ${hdgst:-false},
00:37:09.772 "ddgst": ${ddgst:-false}
00:37:09.772 },
00:37:09.772 "method": "bdev_nvme_attach_controller"
00:37:09.772 }
00:37:09.772 EOF
00:37:09.772 )")
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq .
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=,
00:37:09.772 22:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:37:09.772 "params": {
00:37:09.772 "name": "Nvme1",
00:37:09.772 "trtype": "tcp",
00:37:09.772 "traddr": "10.0.0.2",
00:37:09.772 "adrfam": "ipv4",
00:37:09.772 "trsvcid": "4420",
00:37:09.772 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:37:09.772 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:37:09.772 "hdgst": false,
00:37:09.772 "ddgst": false
00:37:09.772 },
00:37:09.772 "method": "bdev_nvme_attach_controller"
00:37:09.772 }'
00:37:09.772 [2024-10-12 22:25:28.224969] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
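The gen_nvmf_target_json trace above expands a per-subsystem heredoc into the JSON that bdevperf reads from its --json file descriptor. A minimal standalone sketch of that expansion, assuming the same values as this run (tcp, 10.0.0.2, port 4420) and simplifying the traced config array to a single subsystem:

```shell
#!/bin/sh
# Sketch of the heredoc-based JSON generation traced in nvmf/common.sh above.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT mirror this run's values;
# hdgst and ddgst default to false exactly as in the traced output.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

The real helper accumulates one such fragment per subsystem into an array and joins them with `IFS=,` before piping through `jq .`; the sketch keeps only the single-subsystem case this log exercises.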
00:37:09.772 [2024-10-12 22:25:28.225040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770573 ]
00:37:10.032 [2024-10-12 22:25:28.307956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:10.032 [2024-10-12 22:25:28.354423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:37:10.032 Running I/O for 1 seconds...
00:37:11.416 8670.00 IOPS, 33.87 MiB/s
00:37:11.416 Latency(us)
00:37:11.416 [2024-10-12T20:25:29.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:11.416 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:37:11.416 Verification LBA range: start 0x0 length 0x4000
00:37:11.416 Nvme1n1 : 1.01 8673.83 33.88 0.00 0.00 14693.04 2621.44 13489.49
00:37:11.416 [2024-10-12T20:25:29.905Z] ===================================================================================================================
00:37:11.416 [2024-10-12T20:25:29.905Z] Total : 8673.83 33.88 0.00 0.00 14693.04 2621.44 13489.49
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3770913
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # config=()
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # local subsystem config
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:37:11.416 {
00:37:11.416 "params": {
00:37:11.416 "name": "Nvme$subsystem",
00:37:11.416 "trtype": "$TEST_TRANSPORT",
00:37:11.416 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:11.416 "adrfam": "ipv4",
00:37:11.416 "trsvcid": "$NVMF_PORT",
00:37:11.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:11.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:11.416 "hdgst": ${hdgst:-false},
00:37:11.416 "ddgst": ${ddgst:-false}
00:37:11.416 },
00:37:11.416 "method": "bdev_nvme_attach_controller"
00:37:11.416 }
00:37:11.416 EOF
00:37:11.416 )")
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@578 -- # cat
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # jq .
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@581 -- # IFS=,
00:37:11.416 22:25:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:37:11.416 "params": {
00:37:11.416 "name": "Nvme1",
00:37:11.416 "trtype": "tcp",
00:37:11.416 "traddr": "10.0.0.2",
00:37:11.416 "adrfam": "ipv4",
00:37:11.416 "trsvcid": "4420",
00:37:11.416 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:37:11.416 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:37:11.416 "hdgst": false,
00:37:11.417 "ddgst": false
00:37:11.417 },
00:37:11.417 "method": "bdev_nvme_attach_controller"
00:37:11.417 }'
00:37:11.417 [2024-10-12 22:25:29.733491] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
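In the run that follows, the harness hard-kills the first nvmf target process (`kill -9 3770368`) while the second bdevperf instance keeps issuing I/O, then sleeps; the initiator's in-flight commands drain as the long stream of ABORTED - SQ DELETION completions below. A minimal runnable sketch of that kill-and-wait step, with a background `sleep` standing in (hypothetically) for the target process:

```shell
#!/bin/sh
# Sketch of the bdevperf.sh failover step: SIGKILL the target mid-run, then
# pause before recovery. A background 'sleep' is a stand-in for the nvmf target.
sleep 60 &
tgtpid=$!
kill -9 "$tgtpid"            # hard-kill, as the script does with 'kill -9 <pid>'
wait "$tgtpid" 2>/dev/null   # reap; exit status is 128+9 for SIGKILL
status=$?
sleep 1                      # the real script uses 'sleep 3' before checking
echo "target exited with status $status"
```

The kill is deliberately unclean (SIGKILL, no shutdown RPC), which is why the host side logs aborted commands rather than a graceful disconnect.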
00:37:11.417 [2024-10-12 22:25:29.733568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770913 ]
00:37:11.417 [2024-10-12 22:25:29.815288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:11.417 [2024-10-12 22:25:29.860446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:37:11.676 Running I/O for 15 seconds...
00:37:13.999 11026.00 IOPS, 43.07 MiB/s
[2024-10-12T20:25:32.751Z] 11032.50 IOPS, 43.10 MiB/s
[2024-10-12T20:25:32.751Z] 22:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3770368
00:37:14.262 22:25:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:37:14.262 [2024-10-12 22:25:32.700554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:14.262 [2024-10-12 22:25:32.700596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:14.262 [2024-10-12 22:25:32.700622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:14.262 [2024-10-12 22:25:32.700636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:14.262 [2024-10-12 22:25:32.700652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:14.262 [2024-10-12 22:25:32.700666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:14.262 [2024-10-12 22:25:32.700683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:20 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.262 [2024-10-12 22:25:32.700696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.262 [2024-10-12 22:25:32.700713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.262 [2024-10-12 22:25:32.700727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.262 [2024-10-12 22:25:32.700741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.262 [2024-10-12 22:25:32.700754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.262 [2024-10-12 22:25:32.700769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.262 [2024-10-12 22:25:32.700782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.262 [2024-10-12 22:25:32.700800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.700815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.700833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.700847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:37:14.263 [2024-10-12 22:25:32.700863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.700876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.700893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.700907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.700928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.700939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.700952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.700963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.700977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.700987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 
22:25:32.701501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701652] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.701984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.701999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.702012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.702026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.702039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.702054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.702066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.702081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.702094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.702115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.702126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.263 [2024-10-12 22:25:32.702144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.263 [2024-10-12 22:25:32.702157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 
[2024-10-12 22:25:32.702295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702449] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.702979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.702992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.703019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.703046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.703074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 
22:25:32.703089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.703106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.703134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.703162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.703189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.703220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.703247] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.264 [2024-10-12 22:25:32.703275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.264 [2024-10-12 22:25:32.703304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.264 [2024-10-12 22:25:32.703319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.264 [2024-10-12 22:25:32.703331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:14.265 [2024-10-12 22:25:32.703888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.265 [2024-10-12 22:25:32.703946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.703974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.703990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.704003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.704030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.704058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.704086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.704119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.704147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.265 [2024-10-12 22:25:32.704175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.265 [2024-10-12 22:25:32.704202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.265 [2024-10-12 22:25:32.704230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.265 [2024-10-12 22:25:32.704261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.265 [2024-10-12 22:25:32.704289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.265 [2024-10-12 22:25:32.704316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e37e0 is same with the state(6) to be set 00:37:14.265 [2024-10-12 22:25:32.704346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:14.265 [2024-10-12 22:25:32.704356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:14.265 [2024-10-12 22:25:32.704368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:92976 len:8 PRP1 0x0 PRP2 0x0 00:37:14.265 [2024-10-12 22:25:32.704380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704429] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5e37e0 was disconnected and freed. reset controller. 00:37:14.265 [2024-10-12 22:25:32.704493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.265 [2024-10-12 22:25:32.704509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.265 [2024-10-12 22:25:32.704536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.265 [2024-10-12 22:25:32.704562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.265 [2024-10-12 22:25:32.704588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.265 [2024-10-12 22:25:32.704600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.266 [2024-10-12 22:25:32.709298] 
nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.266 [2024-10-12 22:25:32.709331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.266 [2024-10-12 22:25:32.710159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.266 [2024-10-12 22:25:32.710189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.266 [2024-10-12 22:25:32.710202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.266 [2024-10-12 22:25:32.710454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.266 [2024-10-12 22:25:32.710685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.266 [2024-10-12 22:25:32.710696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.266 [2024-10-12 22:25:32.710713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.266 [2024-10-12 22:25:32.714253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.266 [2024-10-12 22:25:32.723351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.266 [2024-10-12 22:25:32.723965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.266 [2024-10-12 22:25:32.724004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.266 [2024-10-12 22:25:32.724018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.266 [2024-10-12 22:25:32.724294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.266 [2024-10-12 22:25:32.724529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.266 [2024-10-12 22:25:32.724540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.266 [2024-10-12 22:25:32.724553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.266 [2024-10-12 22:25:32.728080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.266 [2024-10-12 22:25:32.737182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.266 [2024-10-12 22:25:32.737774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.266 [2024-10-12 22:25:32.737796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.266 [2024-10-12 22:25:32.737808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.266 [2024-10-12 22:25:32.738054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.266 [2024-10-12 22:25:32.738294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.266 [2024-10-12 22:25:32.738305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.266 [2024-10-12 22:25:32.738316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.266 [2024-10-12 22:25:32.741852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.528 [2024-10-12 22:25:32.750947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.528 [2024-10-12 22:25:32.751607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.528 [2024-10-12 22:25:32.751648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.528 [2024-10-12 22:25:32.751663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.528 [2024-10-12 22:25:32.751930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.528 [2024-10-12 22:25:32.752169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.528 [2024-10-12 22:25:32.752181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.528 [2024-10-12 22:25:32.752193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.528 [2024-10-12 22:25:32.755719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.528 [2024-10-12 22:25:32.764996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.528 [2024-10-12 22:25:32.765616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.528 [2024-10-12 22:25:32.765638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.528 [2024-10-12 22:25:32.765650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.528 [2024-10-12 22:25:32.765894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.528 [2024-10-12 22:25:32.766130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.528 [2024-10-12 22:25:32.766141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.528 [2024-10-12 22:25:32.766153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.528 [2024-10-12 22:25:32.769669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.528 [2024-10-12 22:25:32.778764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.528 [2024-10-12 22:25:32.779324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.528 [2024-10-12 22:25:32.779346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.528 [2024-10-12 22:25:32.779358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.528 [2024-10-12 22:25:32.779601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.528 [2024-10-12 22:25:32.779833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.528 [2024-10-12 22:25:32.779843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.528 [2024-10-12 22:25:32.779854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.528 [2024-10-12 22:25:32.783383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.528 [2024-10-12 22:25:32.792672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.528 [2024-10-12 22:25:32.793360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.528 [2024-10-12 22:25:32.793405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.528 [2024-10-12 22:25:32.793420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.528 [2024-10-12 22:25:32.793690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.528 [2024-10-12 22:25:32.793925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.528 [2024-10-12 22:25:32.793936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.528 [2024-10-12 22:25:32.793948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.528 [2024-10-12 22:25:32.797485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.528 [2024-10-12 22:25:32.806579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.528 [2024-10-12 22:25:32.807165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.528 [2024-10-12 22:25:32.807211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.528 [2024-10-12 22:25:32.807227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.528 [2024-10-12 22:25:32.807496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.807736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.807747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.807759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.811308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.820402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.820991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.821017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.529 [2024-10-12 22:25:32.821029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.529 [2024-10-12 22:25:32.821284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.821517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.821528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.821539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.825072] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.834170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.834810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.834862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.529 [2024-10-12 22:25:32.834878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.529 [2024-10-12 22:25:32.835163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.835399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.835410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.835422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.838959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.848082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.848692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.848717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.529 [2024-10-12 22:25:32.848730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.529 [2024-10-12 22:25:32.848975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.849213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.849224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.849236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.852778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.861881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.862502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.862527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.529 [2024-10-12 22:25:32.862540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.529 [2024-10-12 22:25:32.862785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.863015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.863027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.863039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.866581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.875686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.876383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.876435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.529 [2024-10-12 22:25:32.876451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.529 [2024-10-12 22:25:32.876727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.876963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.876975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.876987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.880551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.889473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.890099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.890135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.529 [2024-10-12 22:25:32.890147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.529 [2024-10-12 22:25:32.890397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.890629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.890640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.890652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.894187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.903295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.903965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.904021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.529 [2024-10-12 22:25:32.904045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.529 [2024-10-12 22:25:32.904333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.904570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.904581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.904593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.908146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.917052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.917661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.917691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.529 [2024-10-12 22:25:32.917703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.529 [2024-10-12 22:25:32.917949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.918191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.918202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.918214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.921754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.930866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.931395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.931424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.529 [2024-10-12 22:25:32.931437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.529 [2024-10-12 22:25:32.931687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.931920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.931931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.931943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.935500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.944646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.945403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.945468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.529 [2024-10-12 22:25:32.945484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.529 [2024-10-12 22:25:32.945767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.529 [2024-10-12 22:25:32.946005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.529 [2024-10-12 22:25:32.946029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.529 [2024-10-12 22:25:32.946043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.529 [2024-10-12 22:25:32.949609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.529 [2024-10-12 22:25:32.958534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.529 [2024-10-12 22:25:32.959378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.529 [2024-10-12 22:25:32.959442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.530 [2024-10-12 22:25:32.959458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.530 [2024-10-12 22:25:32.959751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.530 [2024-10-12 22:25:32.959998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.530 [2024-10-12 22:25:32.960009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.530 [2024-10-12 22:25:32.960024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.530 [2024-10-12 22:25:32.963595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.530 [2024-10-12 22:25:32.972382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.530 [2024-10-12 22:25:32.973051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.530 [2024-10-12 22:25:32.973081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.530 [2024-10-12 22:25:32.973095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.530 [2024-10-12 22:25:32.973357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.530 [2024-10-12 22:25:32.973593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.530 [2024-10-12 22:25:32.973604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.530 [2024-10-12 22:25:32.973616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.530 [2024-10-12 22:25:32.977162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.530 [2024-10-12 22:25:32.986318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.530 [2024-10-12 22:25:32.986937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.530 [2024-10-12 22:25:32.986965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.530 [2024-10-12 22:25:32.986978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.530 [2024-10-12 22:25:32.987235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.530 [2024-10-12 22:25:32.987468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.530 [2024-10-12 22:25:32.987479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.530 [2024-10-12 22:25:32.987490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.530 [2024-10-12 22:25:32.991040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.530 [2024-10-12 22:25:33.000091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.530 [2024-10-12 22:25:33.000830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.530 [2024-10-12 22:25:33.000893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.530 [2024-10-12 22:25:33.000911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.530 [2024-10-12 22:25:33.001211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.530 [2024-10-12 22:25:33.001450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.530 [2024-10-12 22:25:33.001462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.530 [2024-10-12 22:25:33.001474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.530 [2024-10-12 22:25:33.005024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.530 [2024-10-12 22:25:33.013948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.792 [2024-10-12 22:25:33.014562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.792 [2024-10-12 22:25:33.014596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.792 [2024-10-12 22:25:33.014609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.792 [2024-10-12 22:25:33.014858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.792 [2024-10-12 22:25:33.015093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.792 [2024-10-12 22:25:33.015113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.792 [2024-10-12 22:25:33.015126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.792 [2024-10-12 22:25:33.018666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.792 [2024-10-12 22:25:33.027790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.792 [2024-10-12 22:25:33.028480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.792 [2024-10-12 22:25:33.028545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.792 [2024-10-12 22:25:33.028563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.792 [2024-10-12 22:25:33.028848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.792 [2024-10-12 22:25:33.029085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.792 [2024-10-12 22:25:33.029096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.792 [2024-10-12 22:25:33.029124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.792 [2024-10-12 22:25:33.032672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.792 [2024-10-12 22:25:33.041602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.792 [2024-10-12 22:25:33.042233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.792 [2024-10-12 22:25:33.042297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.792 [2024-10-12 22:25:33.042314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.792 [2024-10-12 22:25:33.042608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.792 [2024-10-12 22:25:33.042845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.792 [2024-10-12 22:25:33.042856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.793 [2024-10-12 22:25:33.042868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.793 [2024-10-12 22:25:33.046430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.793 [2024-10-12 22:25:33.055556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.793 [2024-10-12 22:25:33.056220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.793 [2024-10-12 22:25:33.056285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.793 [2024-10-12 22:25:33.056301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.793 [2024-10-12 22:25:33.056585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.793 [2024-10-12 22:25:33.056822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.793 [2024-10-12 22:25:33.056833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.793 [2024-10-12 22:25:33.056846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.793 [2024-10-12 22:25:33.060411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.793 [2024-10-12 22:25:33.069326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.793 [2024-10-12 22:25:33.070054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.793 [2024-10-12 22:25:33.070127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.793 [2024-10-12 22:25:33.070146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.793 [2024-10-12 22:25:33.070430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.793 [2024-10-12 22:25:33.070669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.793 [2024-10-12 22:25:33.070681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.793 [2024-10-12 22:25:33.070694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.793 [2024-10-12 22:25:33.074260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.793 [2024-10-12 22:25:33.083494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.793 [2024-10-12 22:25:33.084215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.793 [2024-10-12 22:25:33.084281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.793 [2024-10-12 22:25:33.084297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.793 [2024-10-12 22:25:33.084583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.793 [2024-10-12 22:25:33.084820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.793 [2024-10-12 22:25:33.084833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.793 [2024-10-12 22:25:33.084854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.793 [2024-10-12 22:25:33.088418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.793 [2024-10-12 22:25:33.097333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.793 [2024-10-12 22:25:33.097985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.793 [2024-10-12 22:25:33.098016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.793 [2024-10-12 22:25:33.098029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.793 [2024-10-12 22:25:33.098303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.793 [2024-10-12 22:25:33.098539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.793 [2024-10-12 22:25:33.098551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.793 [2024-10-12 22:25:33.098564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.793 [2024-10-12 22:25:33.102112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.793 [2024-10-12 22:25:33.111234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.793 [2024-10-12 22:25:33.111943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.793 [2024-10-12 22:25:33.112007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.793 [2024-10-12 22:25:33.112024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.793 [2024-10-12 22:25:33.112321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.793 [2024-10-12 22:25:33.112559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.793 [2024-10-12 22:25:33.112571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.793 [2024-10-12 22:25:33.112584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.793 [2024-10-12 22:25:33.116140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.793 [2024-10-12 22:25:33.125057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.793 [2024-10-12 22:25:33.125775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.793 [2024-10-12 22:25:33.125838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.793 [2024-10-12 22:25:33.125854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.793 [2024-10-12 22:25:33.126152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.793 [2024-10-12 22:25:33.126390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.793 [2024-10-12 22:25:33.126401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.793 [2024-10-12 22:25:33.126414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.793 [2024-10-12 22:25:33.129960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.793 [2024-10-12 22:25:33.138880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.793 [2024-10-12 22:25:33.139548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.793 [2024-10-12 22:25:33.139579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.793 [2024-10-12 22:25:33.139592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.793 [2024-10-12 22:25:33.139843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.793 [2024-10-12 22:25:33.140076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.793 [2024-10-12 22:25:33.140087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.793 [2024-10-12 22:25:33.140099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.793 [2024-10-12 22:25:33.143658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.793 [2024-10-12 22:25:33.152778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.793 [2024-10-12 22:25:33.153474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.793 [2024-10-12 22:25:33.153538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.793 [2024-10-12 22:25:33.153555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.793 [2024-10-12 22:25:33.153843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.793 [2024-10-12 22:25:33.154081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.793 [2024-10-12 22:25:33.154092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.793 [2024-10-12 22:25:33.154112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.793 [2024-10-12 22:25:33.157667] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.793 [2024-10-12 22:25:33.166580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.793 [2024-10-12 22:25:33.167342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.793 [2024-10-12 22:25:33.167406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.793 [2024-10-12 22:25:33.167422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.793 [2024-10-12 22:25:33.167706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.793 [2024-10-12 22:25:33.167944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.793 [2024-10-12 22:25:33.167955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.793 [2024-10-12 22:25:33.167968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.793 [2024-10-12 22:25:33.171542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.793 9293.00 IOPS, 36.30 MiB/s [2024-10-12T20:25:33.282Z] [2024-10-12 22:25:33.180578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.793 [2024-10-12 22:25:33.181255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.793 [2024-10-12 22:25:33.181320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.793 [2024-10-12 22:25:33.181336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.793 [2024-10-12 22:25:33.181629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.793 [2024-10-12 22:25:33.181869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.794 [2024-10-12 22:25:33.181880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.794 [2024-10-12 22:25:33.181892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.794 [2024-10-12 22:25:33.185460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.794 [2024-10-12 22:25:33.194375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.794 [2024-10-12 22:25:33.194991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.794 [2024-10-12 22:25:33.195050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.794 [2024-10-12 22:25:33.195067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.794 [2024-10-12 22:25:33.195362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.794 [2024-10-12 22:25:33.195600] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.794 [2024-10-12 22:25:33.195612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.794 [2024-10-12 22:25:33.195624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.794 [2024-10-12 22:25:33.199173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.794 [2024-10-12 22:25:33.208282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.794 [2024-10-12 22:25:33.208977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.794 [2024-10-12 22:25:33.209039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.794 [2024-10-12 22:25:33.209057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.794 [2024-10-12 22:25:33.209367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.794 [2024-10-12 22:25:33.209607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.794 [2024-10-12 22:25:33.209619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.794 [2024-10-12 22:25:33.209633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.794 [2024-10-12 22:25:33.213194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.794 [2024-10-12 22:25:33.222116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.794 [2024-10-12 22:25:33.222852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.794 [2024-10-12 22:25:33.222916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.794 [2024-10-12 22:25:33.222932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.794 [2024-10-12 22:25:33.223228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.794 [2024-10-12 22:25:33.223466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.794 [2024-10-12 22:25:33.223477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.794 [2024-10-12 22:25:33.223497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.794 [2024-10-12 22:25:33.227047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.794 [2024-10-12 22:25:33.235967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.794 [2024-10-12 22:25:33.236711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.794 [2024-10-12 22:25:33.236776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.794 [2024-10-12 22:25:33.236792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.794 [2024-10-12 22:25:33.237078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.794 [2024-10-12 22:25:33.237327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.794 [2024-10-12 22:25:33.237339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.794 [2024-10-12 22:25:33.237352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.794 [2024-10-12 22:25:33.240927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.794 [2024-10-12 22:25:33.249841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.794 [2024-10-12 22:25:33.250555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.794 [2024-10-12 22:25:33.250619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.794 [2024-10-12 22:25:33.250636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.794 [2024-10-12 22:25:33.250921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.794 [2024-10-12 22:25:33.251172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.794 [2024-10-12 22:25:33.251185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.794 [2024-10-12 22:25:33.251198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.794 [2024-10-12 22:25:33.254748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.794 [2024-10-12 22:25:33.263663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.794 [2024-10-12 22:25:33.264404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.794 [2024-10-12 22:25:33.264468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.794 [2024-10-12 22:25:33.264486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.794 [2024-10-12 22:25:33.264770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:14.794 [2024-10-12 22:25:33.265009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:14.794 [2024-10-12 22:25:33.265020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:14.794 [2024-10-12 22:25:33.265033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:14.794 [2024-10-12 22:25:33.268593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:14.794 [2024-10-12 22:25:33.277511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:14.794 [2024-10-12 22:25:33.278221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:14.794 [2024-10-12 22:25:33.278292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:14.794 [2024-10-12 22:25:33.278309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:14.794 [2024-10-12 22:25:33.278596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.056 [2024-10-12 22:25:33.278848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.056 [2024-10-12 22:25:33.278865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.056 [2024-10-12 22:25:33.278878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.056 [2024-10-12 22:25:33.282447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.056 [2024-10-12 22:25:33.291387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.056 [2024-10-12 22:25:33.292092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.056 [2024-10-12 22:25:33.292165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.056 [2024-10-12 22:25:33.292183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.056 [2024-10-12 22:25:33.292466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.056 [2024-10-12 22:25:33.292703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.056 [2024-10-12 22:25:33.292714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.056 [2024-10-12 22:25:33.292726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.056 [2024-10-12 22:25:33.296289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.056 [2024-10-12 22:25:33.305201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.056 [2024-10-12 22:25:33.305826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.056 [2024-10-12 22:25:33.305890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.056 [2024-10-12 22:25:33.305906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.056 [2024-10-12 22:25:33.306206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.056 [2024-10-12 22:25:33.306443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.056 [2024-10-12 22:25:33.306455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.056 [2024-10-12 22:25:33.306467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.056 [2024-10-12 22:25:33.310013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.056 [2024-10-12 22:25:33.319160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.056 [2024-10-12 22:25:33.319867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.056 [2024-10-12 22:25:33.319931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.056 [2024-10-12 22:25:33.319946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.056 [2024-10-12 22:25:33.320240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.056 [2024-10-12 22:25:33.320487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.056 [2024-10-12 22:25:33.320499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.056 [2024-10-12 22:25:33.320511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.056 [2024-10-12 22:25:33.324060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.056 [2024-10-12 22:25:33.332973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.056 [2024-10-12 22:25:33.333696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.056 [2024-10-12 22:25:33.333759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.056 [2024-10-12 22:25:33.333775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.056 [2024-10-12 22:25:33.334061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.056 [2024-10-12 22:25:33.334311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.056 [2024-10-12 22:25:33.334324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.056 [2024-10-12 22:25:33.334336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.056 [2024-10-12 22:25:33.337876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.056 [2024-10-12 22:25:33.346795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.056 [2024-10-12 22:25:33.347502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.056 [2024-10-12 22:25:33.347565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.056 [2024-10-12 22:25:33.347583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.056 [2024-10-12 22:25:33.347869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.056 [2024-10-12 22:25:33.348120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.056 [2024-10-12 22:25:33.348132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.056 [2024-10-12 22:25:33.348146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.056 [2024-10-12 22:25:33.351694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.056 [2024-10-12 22:25:33.360608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.056 [2024-10-12 22:25:33.361224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.056 [2024-10-12 22:25:33.361289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.056 [2024-10-12 22:25:33.361307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.056 [2024-10-12 22:25:33.361593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.056 [2024-10-12 22:25:33.361829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.056 [2024-10-12 22:25:33.361840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.056 [2024-10-12 22:25:33.361852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.056 [2024-10-12 22:25:33.365416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.056 [2024-10-12 22:25:33.374551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.056 [2024-10-12 22:25:33.375223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.056 [2024-10-12 22:25:33.375286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.056 [2024-10-12 22:25:33.375301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.056 [2024-10-12 22:25:33.375586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.056 [2024-10-12 22:25:33.375823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.056 [2024-10-12 22:25:33.375834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.056 [2024-10-12 22:25:33.375847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.056 [2024-10-12 22:25:33.379541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.388531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.389166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.389198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.389211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.389462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.389696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.057 [2024-10-12 22:25:33.389707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.057 [2024-10-12 22:25:33.389719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.057 [2024-10-12 22:25:33.393267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.402397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.403083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.403158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.403174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.403461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.403698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.057 [2024-10-12 22:25:33.403709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.057 [2024-10-12 22:25:33.403722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.057 [2024-10-12 22:25:33.407282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.416217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.416821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.416851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.416874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.417133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.417369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.057 [2024-10-12 22:25:33.417382] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.057 [2024-10-12 22:25:33.417395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.057 [2024-10-12 22:25:33.420919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.428887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.429565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.429622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.429635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.429846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.430011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.057 [2024-10-12 22:25:33.430019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.057 [2024-10-12 22:25:33.430028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.057 [2024-10-12 22:25:33.432481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.441591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.442112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.442137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.442146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.442319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.442480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.057 [2024-10-12 22:25:33.442488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.057 [2024-10-12 22:25:33.442496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.057 [2024-10-12 22:25:33.444937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.454291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.454873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.454919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.454931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.455139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.455302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.057 [2024-10-12 22:25:33.455315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.057 [2024-10-12 22:25:33.455324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.057 [2024-10-12 22:25:33.457758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.466968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.467608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.467651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.467662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.467858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.468021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.057 [2024-10-12 22:25:33.468029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.057 [2024-10-12 22:25:33.468038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.057 [2024-10-12 22:25:33.470489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.479571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.480216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.480257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.480270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.480466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.480629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.057 [2024-10-12 22:25:33.480637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.057 [2024-10-12 22:25:33.480645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.057 [2024-10-12 22:25:33.483082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.492291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.492925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.492962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.492973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.493170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.493333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.057 [2024-10-12 22:25:33.493340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.057 [2024-10-12 22:25:33.493349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.057 [2024-10-12 22:25:33.495775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.504964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.505452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.505469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.505478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.505648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.505808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.057 [2024-10-12 22:25:33.505815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.057 [2024-10-12 22:25:33.505823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.057 [2024-10-12 22:25:33.508290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.057 [2024-10-12 22:25:33.517616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.057 [2024-10-12 22:25:33.518081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.057 [2024-10-12 22:25:33.518098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.057 [2024-10-12 22:25:33.518113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.057 [2024-10-12 22:25:33.518281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.057 [2024-10-12 22:25:33.518439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.058 [2024-10-12 22:25:33.518446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.058 [2024-10-12 22:25:33.518454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.058 [2024-10-12 22:25:33.520871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.058 [2024-10-12 22:25:33.530206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.058 [2024-10-12 22:25:33.530710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.058 [2024-10-12 22:25:33.530725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.058 [2024-10-12 22:25:33.530734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.058 [2024-10-12 22:25:33.530901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.058 [2024-10-12 22:25:33.531060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.058 [2024-10-12 22:25:33.531068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.058 [2024-10-12 22:25:33.531076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.058 [2024-10-12 22:25:33.533502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.320 [2024-10-12 22:25:33.542834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.320 [2024-10-12 22:25:33.543316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.320 [2024-10-12 22:25:33.543332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.320 [2024-10-12 22:25:33.543340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.320 [2024-10-12 22:25:33.543511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.320 [2024-10-12 22:25:33.543669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.320 [2024-10-12 22:25:33.543676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.320 [2024-10-12 22:25:33.543684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.320 [2024-10-12 22:25:33.546097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.320 [2024-10-12 22:25:33.555415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.320 [2024-10-12 22:25:33.555914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.320 [2024-10-12 22:25:33.555929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.320 [2024-10-12 22:25:33.555937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.320 [2024-10-12 22:25:33.556109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.320 [2024-10-12 22:25:33.556268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.320 [2024-10-12 22:25:33.556275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.320 [2024-10-12 22:25:33.556282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.320 [2024-10-12 22:25:33.558694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.320 [2024-10-12 22:25:33.568003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.320 [2024-10-12 22:25:33.568461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.320 [2024-10-12 22:25:33.568476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.320 [2024-10-12 22:25:33.568484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.320 [2024-10-12 22:25:33.568651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.320 [2024-10-12 22:25:33.568809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.320 [2024-10-12 22:25:33.568817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.320 [2024-10-12 22:25:33.568824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.320 [2024-10-12 22:25:33.571273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.320 [2024-10-12 22:25:33.580590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.320 [2024-10-12 22:25:33.581084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.320 [2024-10-12 22:25:33.581099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.320 [2024-10-12 22:25:33.581112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.320 [2024-10-12 22:25:33.581280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.320 [2024-10-12 22:25:33.581437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.320 [2024-10-12 22:25:33.581444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.320 [2024-10-12 22:25:33.581456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.320 [2024-10-12 22:25:33.583873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.320 [2024-10-12 22:25:33.593189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.320 [2024-10-12 22:25:33.593696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.320 [2024-10-12 22:25:33.593710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.320 [2024-10-12 22:25:33.593718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.320 [2024-10-12 22:25:33.593885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.320 [2024-10-12 22:25:33.594043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.320 [2024-10-12 22:25:33.594050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.320 [2024-10-12 22:25:33.594058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.320 [2024-10-12 22:25:33.596499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.320 [2024-10-12 22:25:33.605814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.320 [2024-10-12 22:25:33.606291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.320 [2024-10-12 22:25:33.606306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.320 [2024-10-12 22:25:33.606314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.320 [2024-10-12 22:25:33.606481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.320 [2024-10-12 22:25:33.606640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.320 [2024-10-12 22:25:33.606647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.320 [2024-10-12 22:25:33.606655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.320 [2024-10-12 22:25:33.609069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.320 [2024-10-12 22:25:33.618412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.320 [2024-10-12 22:25:33.619018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.320 [2024-10-12 22:25:33.619049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.320 [2024-10-12 22:25:33.619060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.320 [2024-10-12 22:25:33.619254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.320 [2024-10-12 22:25:33.619416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.320 [2024-10-12 22:25:33.619424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.320 [2024-10-12 22:25:33.619432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.320 [2024-10-12 22:25:33.621850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.321 [2024-10-12 22:25:33.631029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.321 [2024-10-12 22:25:33.631622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.321 [2024-10-12 22:25:33.631660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.321 [2024-10-12 22:25:33.631670] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.321 [2024-10-12 22:25:33.631855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.321 [2024-10-12 22:25:33.632015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.321 [2024-10-12 22:25:33.632023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.321 [2024-10-12 22:25:33.632031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.321 [2024-10-12 22:25:33.634454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.321 [2024-10-12 22:25:33.643643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.321 [2024-10-12 22:25:33.644097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.321 [2024-10-12 22:25:33.644132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.321 [2024-10-12 22:25:33.644143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.321 [2024-10-12 22:25:33.644331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.321 [2024-10-12 22:25:33.644493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.321 [2024-10-12 22:25:33.644501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.321 [2024-10-12 22:25:33.644509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.321 [2024-10-12 22:25:33.646930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.321 [2024-10-12 22:25:33.656254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.321 [2024-10-12 22:25:33.656853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.321 [2024-10-12 22:25:33.656883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.321 [2024-10-12 22:25:33.656895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.321 [2024-10-12 22:25:33.657078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.321 [2024-10-12 22:25:33.657245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.321 [2024-10-12 22:25:33.657253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.321 [2024-10-12 22:25:33.657262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.321 [2024-10-12 22:25:33.659685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.321 [2024-10-12 22:25:33.668866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.321 [2024-10-12 22:25:33.669427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.321 [2024-10-12 22:25:33.669458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.321 [2024-10-12 22:25:33.669469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.321 [2024-10-12 22:25:33.669653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.321 [2024-10-12 22:25:33.669816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.321 [2024-10-12 22:25:33.669824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.321 [2024-10-12 22:25:33.669833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.321 [2024-10-12 22:25:33.672262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.321 [2024-10-12 22:25:33.681451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.321 [2024-10-12 22:25:33.682055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.321 [2024-10-12 22:25:33.682085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.321 [2024-10-12 22:25:33.682096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.321 [2024-10-12 22:25:33.682288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.321 [2024-10-12 22:25:33.682450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.321 [2024-10-12 22:25:33.682458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.321 [2024-10-12 22:25:33.682466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.321 [2024-10-12 22:25:33.684890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.321 [2024-10-12 22:25:33.694070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.321 [2024-10-12 22:25:33.694678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.321 [2024-10-12 22:25:33.694709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.321 [2024-10-12 22:25:33.694720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.321 [2024-10-12 22:25:33.694904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.321 [2024-10-12 22:25:33.695065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.321 [2024-10-12 22:25:33.695073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.321 [2024-10-12 22:25:33.695081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.321 [2024-10-12 22:25:33.697504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.321 [2024-10-12 22:25:33.706681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.321 [2024-10-12 22:25:33.707317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.321 [2024-10-12 22:25:33.707347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.321 [2024-10-12 22:25:33.707358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.321 [2024-10-12 22:25:33.707545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.321 [2024-10-12 22:25:33.707704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.321 [2024-10-12 22:25:33.707712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.321 [2024-10-12 22:25:33.707720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.321 [2024-10-12 22:25:33.710149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.321 [2024-10-12 22:25:33.719330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.321 [2024-10-12 22:25:33.719951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.321 [2024-10-12 22:25:33.719981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.321 [2024-10-12 22:25:33.719993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.321 [2024-10-12 22:25:33.720186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.321 [2024-10-12 22:25:33.720350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.321 [2024-10-12 22:25:33.720358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.321 [2024-10-12 22:25:33.720366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.321 [2024-10-12 22:25:33.722783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.321 [2024-10-12 22:25:33.731962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.321 [2024-10-12 22:25:33.732562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.321 [2024-10-12 22:25:33.732593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.321 [2024-10-12 22:25:33.732604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.321 [2024-10-12 22:25:33.732790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.321 [2024-10-12 22:25:33.732950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.321 [2024-10-12 22:25:33.732958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.321 [2024-10-12 22:25:33.732966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.321 [2024-10-12 22:25:33.735467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.321 [2024-10-12 22:25:33.744660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.321 [2024-10-12 22:25:33.745144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.321 [2024-10-12 22:25:33.745167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.321 [2024-10-12 22:25:33.745176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.321 [2024-10-12 22:25:33.745350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.321 [2024-10-12 22:25:33.745510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.321 [2024-10-12 22:25:33.745517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.321 [2024-10-12 22:25:33.745525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.321 [2024-10-12 22:25:33.747939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.322 [2024-10-12 22:25:33.757256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.322 [2024-10-12 22:25:33.757857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.322 [2024-10-12 22:25:33.757888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.322 [2024-10-12 22:25:33.757902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.322 [2024-10-12 22:25:33.758086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.322 [2024-10-12 22:25:33.758255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.322 [2024-10-12 22:25:33.758264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.322 [2024-10-12 22:25:33.758272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.322 [2024-10-12 22:25:33.760690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.322 [2024-10-12 22:25:33.769903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.322 [2024-10-12 22:25:33.770466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.322 [2024-10-12 22:25:33.770496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.322 [2024-10-12 22:25:33.770507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.322 [2024-10-12 22:25:33.770692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.322 [2024-10-12 22:25:33.770852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.322 [2024-10-12 22:25:33.770859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.322 [2024-10-12 22:25:33.770868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.322 [2024-10-12 22:25:33.773292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.322 [2024-10-12 22:25:33.782483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.322 [2024-10-12 22:25:33.782947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.322 [2024-10-12 22:25:33.782964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.322 [2024-10-12 22:25:33.782973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.322 [2024-10-12 22:25:33.783147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.322 [2024-10-12 22:25:33.783305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.322 [2024-10-12 22:25:33.783313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.322 [2024-10-12 22:25:33.783321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.322 [2024-10-12 22:25:33.785735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.322 [2024-10-12 22:25:33.795051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.322 [2024-10-12 22:25:33.795617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.322 [2024-10-12 22:25:33.795648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.322 [2024-10-12 22:25:33.795659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.322 [2024-10-12 22:25:33.795843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.322 [2024-10-12 22:25:33.796005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.322 [2024-10-12 22:25:33.796017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.322 [2024-10-12 22:25:33.796025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.322 [2024-10-12 22:25:33.798447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.584 [2024-10-12 22:25:33.807660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.584 [2024-10-12 22:25:33.808194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-10-12 22:25:33.808225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.584 [2024-10-12 22:25:33.808236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.584 [2024-10-12 22:25:33.808427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.584 [2024-10-12 22:25:33.808587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.584 [2024-10-12 22:25:33.808595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.584 [2024-10-12 22:25:33.808603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.584 [2024-10-12 22:25:33.811025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.584 [2024-10-12 22:25:33.820352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.584 [2024-10-12 22:25:33.820941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-10-12 22:25:33.820971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.584 [2024-10-12 22:25:33.820982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.584 [2024-10-12 22:25:33.821173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.584 [2024-10-12 22:25:33.821334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.584 [2024-10-12 22:25:33.821341] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.584 [2024-10-12 22:25:33.821349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.584 [2024-10-12 22:25:33.823765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.584 [2024-10-12 22:25:33.832947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.584 [2024-10-12 22:25:33.833556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-10-12 22:25:33.833586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.584 [2024-10-12 22:25:33.833597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.584 [2024-10-12 22:25:33.833782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.585 [2024-10-12 22:25:33.833944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.585 [2024-10-12 22:25:33.833951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.585 [2024-10-12 22:25:33.833959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.585 [2024-10-12 22:25:33.836383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.585 [2024-10-12 22:25:33.845575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.585 [2024-10-12 22:25:33.846203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-10-12 22:25:33.846234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.585 [2024-10-12 22:25:33.846245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.585 [2024-10-12 22:25:33.846434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.585 [2024-10-12 22:25:33.846595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.585 [2024-10-12 22:25:33.846603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.585 [2024-10-12 22:25:33.846611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.585 [2024-10-12 22:25:33.849031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.585 [2024-10-12 22:25:33.858211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.585 [2024-10-12 22:25:33.858719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-10-12 22:25:33.858735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.585 [2024-10-12 22:25:33.858744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.585 [2024-10-12 22:25:33.858911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.585 [2024-10-12 22:25:33.859069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.585 [2024-10-12 22:25:33.859076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.585 [2024-10-12 22:25:33.859084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.585 [2024-10-12 22:25:33.861506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.585 [2024-10-12 22:25:33.870819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.585 [2024-10-12 22:25:33.871375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-10-12 22:25:33.871406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.585 [2024-10-12 22:25:33.871417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.585 [2024-10-12 22:25:33.871603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.585 [2024-10-12 22:25:33.871764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.585 [2024-10-12 22:25:33.871771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.585 [2024-10-12 22:25:33.871780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.585 [2024-10-12 22:25:33.874203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.585 [2024-10-12 22:25:33.883395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.585 [2024-10-12 22:25:33.883860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-10-12 22:25:33.883877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.585 [2024-10-12 22:25:33.883885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.585 [2024-10-12 22:25:33.884060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.585 [2024-10-12 22:25:33.884225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.585 [2024-10-12 22:25:33.884233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.585 [2024-10-12 22:25:33.884240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.585 [2024-10-12 22:25:33.886655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.585 [2024-10-12 22:25:33.895968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.585 [2024-10-12 22:25:33.896523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-10-12 22:25:33.896554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.585 [2024-10-12 22:25:33.896565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.585 [2024-10-12 22:25:33.896748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.585 [2024-10-12 22:25:33.896908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.585 [2024-10-12 22:25:33.896916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.585 [2024-10-12 22:25:33.896924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.585 [2024-10-12 22:25:33.899347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.585 [2024-10-12 22:25:33.908664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.585 [2024-10-12 22:25:33.909214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-10-12 22:25:33.909244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.585 [2024-10-12 22:25:33.909255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.585 [2024-10-12 22:25:33.909441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.585 [2024-10-12 22:25:33.909601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.585 [2024-10-12 22:25:33.909608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.585 [2024-10-12 22:25:33.909617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.585 [2024-10-12 22:25:33.912041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.585 [2024-10-12 22:25:33.921366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.585 [2024-10-12 22:25:33.921802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-10-12 22:25:33.921833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.585 [2024-10-12 22:25:33.921844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.585 [2024-10-12 22:25:33.922029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.585 [2024-10-12 22:25:33.922200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.585 [2024-10-12 22:25:33.922208] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.585 [2024-10-12 22:25:33.922221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.585 [2024-10-12 22:25:33.924637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.585 [2024-10-12 22:25:33.933955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.585 [2024-10-12 22:25:33.934566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-10-12 22:25:33.934597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.585 [2024-10-12 22:25:33.934608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.585 [2024-10-12 22:25:33.934792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.585 [2024-10-12 22:25:33.934955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.585 [2024-10-12 22:25:33.934963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.585 [2024-10-12 22:25:33.934971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.585 [2024-10-12 22:25:33.937394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.585 [2024-10-12 22:25:33.946591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.585 [2024-10-12 22:25:33.947203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-10-12 22:25:33.947234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.585 [2024-10-12 22:25:33.947245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.585 [2024-10-12 22:25:33.947432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.585 [2024-10-12 22:25:33.947592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.585 [2024-10-12 22:25:33.947600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.585 [2024-10-12 22:25:33.947608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.585 [2024-10-12 22:25:33.950032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.585 [2024-10-12 22:25:33.959215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.585 [2024-10-12 22:25:33.959804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-10-12 22:25:33.959834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.585 [2024-10-12 22:25:33.959846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.586 [2024-10-12 22:25:33.960030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.586 [2024-10-12 22:25:33.960198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.586 [2024-10-12 22:25:33.960206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.586 [2024-10-12 22:25:33.960214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.586 [2024-10-12 22:25:33.962632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.586 [2024-10-12 22:25:33.971809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.586 [2024-10-12 22:25:33.972422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-10-12 22:25:33.972456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.586 [2024-10-12 22:25:33.972467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.586 [2024-10-12 22:25:33.972653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.586 [2024-10-12 22:25:33.972814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.586 [2024-10-12 22:25:33.972822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.586 [2024-10-12 22:25:33.972831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.586 [2024-10-12 22:25:33.975258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.586 [2024-10-12 22:25:33.984455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.586 [2024-10-12 22:25:33.985046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-10-12 22:25:33.985076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.586 [2024-10-12 22:25:33.985088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.586 [2024-10-12 22:25:33.985285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.586 [2024-10-12 22:25:33.985448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.586 [2024-10-12 22:25:33.985456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.586 [2024-10-12 22:25:33.985465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.586 [2024-10-12 22:25:33.987882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.586 [2024-10-12 22:25:33.997063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.586 [2024-10-12 22:25:33.997575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-10-12 22:25:33.997591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.586 [2024-10-12 22:25:33.997600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.586 [2024-10-12 22:25:33.997767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.586 [2024-10-12 22:25:33.997925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.586 [2024-10-12 22:25:33.997933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.586 [2024-10-12 22:25:33.997940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.586 [2024-10-12 22:25:34.000360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.586 [2024-10-12 22:25:34.009687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.586 [2024-10-12 22:25:34.010157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-10-12 22:25:34.010174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.586 [2024-10-12 22:25:34.010183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.586 [2024-10-12 22:25:34.010352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.586 [2024-10-12 22:25:34.010514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.586 [2024-10-12 22:25:34.010522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.586 [2024-10-12 22:25:34.010530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.586 [2024-10-12 22:25:34.013040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.586 [2024-10-12 22:25:34.022308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.586 [2024-10-12 22:25:34.022891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-10-12 22:25:34.022922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.586 [2024-10-12 22:25:34.022933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.586 [2024-10-12 22:25:34.023125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.586 [2024-10-12 22:25:34.023286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.586 [2024-10-12 22:25:34.023294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.586 [2024-10-12 22:25:34.023303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.586 [2024-10-12 22:25:34.025717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.586 [2024-10-12 22:25:34.034901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.586 [2024-10-12 22:25:34.035378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-10-12 22:25:34.035395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.586 [2024-10-12 22:25:34.035404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.586 [2024-10-12 22:25:34.035572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.586 [2024-10-12 22:25:34.035731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.586 [2024-10-12 22:25:34.035739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.586 [2024-10-12 22:25:34.035748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.586 [2024-10-12 22:25:34.038166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.586 [2024-10-12 22:25:34.047504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.586 [2024-10-12 22:25:34.048003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-10-12 22:25:34.048018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.586 [2024-10-12 22:25:34.048027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.586 [2024-10-12 22:25:34.048200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.586 [2024-10-12 22:25:34.048360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.586 [2024-10-12 22:25:34.048368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.586 [2024-10-12 22:25:34.048376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.586 [2024-10-12 22:25:34.050793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.586 [2024-10-12 22:25:34.060119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.586 [2024-10-12 22:25:34.060696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-10-12 22:25:34.060728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.586 [2024-10-12 22:25:34.060739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.586 [2024-10-12 22:25:34.060924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.586 [2024-10-12 22:25:34.061084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.586 [2024-10-12 22:25:34.061092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.586 [2024-10-12 22:25:34.061101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.586 [2024-10-12 22:25:34.063525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.849 [2024-10-12 22:25:34.072717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.849 [2024-10-12 22:25:34.073230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.849 [2024-10-12 22:25:34.073262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.849 [2024-10-12 22:25:34.073273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.849 [2024-10-12 22:25:34.073464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.849 [2024-10-12 22:25:34.073625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.849 [2024-10-12 22:25:34.073633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.849 [2024-10-12 22:25:34.073642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.849 [2024-10-12 22:25:34.076068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.849 [2024-10-12 22:25:34.085570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:15.849 [2024-10-12 22:25:34.086139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.849 [2024-10-12 22:25:34.086170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:15.849 [2024-10-12 22:25:34.086181] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:15.849 [2024-10-12 22:25:34.086369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:15.849 [2024-10-12 22:25:34.086530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:15.849 [2024-10-12 22:25:34.086538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:15.849 [2024-10-12 22:25:34.086546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:15.849 [2024-10-12 22:25:34.088969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:15.849 [2024-10-12 22:25:34.098162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.849 [2024-10-12 22:25:34.098631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.849 [2024-10-12 22:25:34.098649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.849 [2024-10-12 22:25:34.098661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.849 [2024-10-12 22:25:34.098829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.849 [2024-10-12 22:25:34.098987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.849 [2024-10-12 22:25:34.098995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.849 [2024-10-12 22:25:34.099002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.849 [2024-10-12 22:25:34.101420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.849 [2024-10-12 22:25:34.110773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.849 [2024-10-12 22:25:34.111412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.849 [2024-10-12 22:25:34.111443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.849 [2024-10-12 22:25:34.111454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.849 [2024-10-12 22:25:34.111640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.849 [2024-10-12 22:25:34.111801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.849 [2024-10-12 22:25:34.111809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.849 [2024-10-12 22:25:34.111817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.849 [2024-10-12 22:25:34.114243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.849 [2024-10-12 22:25:34.123433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.849 [2024-10-12 22:25:34.124040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.849 [2024-10-12 22:25:34.124070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.849 [2024-10-12 22:25:34.124081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.849 [2024-10-12 22:25:34.124277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.849 [2024-10-12 22:25:34.124438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.849 [2024-10-12 22:25:34.124446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.849 [2024-10-12 22:25:34.124454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.849 [2024-10-12 22:25:34.126873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.849 [2024-10-12 22:25:34.136060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.849 [2024-10-12 22:25:34.136686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.849 [2024-10-12 22:25:34.136716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.849 [2024-10-12 22:25:34.136727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.849 [2024-10-12 22:25:34.136911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.849 [2024-10-12 22:25:34.137072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.849 [2024-10-12 22:25:34.137083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.849 [2024-10-12 22:25:34.137091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.849 [2024-10-12 22:25:34.139517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.849 [2024-10-12 22:25:34.148720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.849 [2024-10-12 22:25:34.149321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.849 [2024-10-12 22:25:34.149353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.849 [2024-10-12 22:25:34.149364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.849 [2024-10-12 22:25:34.149550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.849 [2024-10-12 22:25:34.149712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.849 [2024-10-12 22:25:34.149719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.849 [2024-10-12 22:25:34.149727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.849 [2024-10-12 22:25:34.152152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.849 [2024-10-12 22:25:34.161347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.849 [2024-10-12 22:25:34.161930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.849 [2024-10-12 22:25:34.161960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.849 [2024-10-12 22:25:34.161971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.849 [2024-10-12 22:25:34.162163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.849 [2024-10-12 22:25:34.162324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.849 [2024-10-12 22:25:34.162331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.849 [2024-10-12 22:25:34.162339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.849 [2024-10-12 22:25:34.164759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.849 [2024-10-12 22:25:34.173954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.849 [2024-10-12 22:25:34.174567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.849 [2024-10-12 22:25:34.174598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.849 [2024-10-12 22:25:34.174609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.849 [2024-10-12 22:25:34.174794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.849 [2024-10-12 22:25:34.174955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.849 [2024-10-12 22:25:34.174963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.849 [2024-10-12 22:25:34.174972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.849 6969.75 IOPS, 27.23 MiB/s [2024-10-12T20:25:34.338Z] [2024-10-12 22:25:34.178531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.849 [2024-10-12 22:25:34.186617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.849 [2024-10-12 22:25:34.187148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.849 [2024-10-12 22:25:34.187171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.849 [2024-10-12 22:25:34.187180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.849 [2024-10-12 22:25:34.187354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.849 [2024-10-12 22:25:34.187515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.187523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.187531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.189951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.199286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.199865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.199896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.199906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.200091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.850 [2024-10-12 22:25:34.200259] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.200267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.200276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.202693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.211879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.212352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.212369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.212378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.212545] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.850 [2024-10-12 22:25:34.212703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.212710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.212718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.215139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.224499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.225010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.225026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.225035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.225213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.850 [2024-10-12 22:25:34.225379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.225386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.225394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.227813] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.237149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.237619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.237635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.237642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.237811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.850 [2024-10-12 22:25:34.237972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.237979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.237987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.240409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.249774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.250245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.250260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.250269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.250436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.850 [2024-10-12 22:25:34.250594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.250602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.250609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.253030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.262370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.262942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.262973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.262985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.263177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.850 [2024-10-12 22:25:34.263339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.263347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.263363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.265780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.274981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.275482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.275499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.275507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.275676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.850 [2024-10-12 22:25:34.275834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.275842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.275849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.278273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.287618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.288122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.288138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.288147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.288314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.850 [2024-10-12 22:25:34.288472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.288479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.288487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.290905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.300243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.300698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.300713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.300721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.300888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.850 [2024-10-12 22:25:34.301047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.301054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.301061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.303481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.312813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.313275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.313290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.313298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.313466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.850 [2024-10-12 22:25:34.313624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.850 [2024-10-12 22:25:34.313632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.850 [2024-10-12 22:25:34.313639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.850 [2024-10-12 22:25:34.316054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:15.850 [2024-10-12 22:25:34.325394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:15.850 [2024-10-12 22:25:34.325963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.850 [2024-10-12 22:25:34.325995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:15.850 [2024-10-12 22:25:34.326006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:15.850 [2024-10-12 22:25:34.326198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:15.851 [2024-10-12 22:25:34.326359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:15.851 [2024-10-12 22:25:34.326368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:15.851 [2024-10-12 22:25:34.326376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:15.851 [2024-10-12 22:25:34.328799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.113 [2024-10-12 22:25:34.337998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.113 [2024-10-12 22:25:34.338490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.113 [2024-10-12 22:25:34.338507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.113 [2024-10-12 22:25:34.338515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.113 [2024-10-12 22:25:34.338685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.113 [2024-10-12 22:25:34.338843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.113 [2024-10-12 22:25:34.338850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.113 [2024-10-12 22:25:34.338859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.113 [2024-10-12 22:25:34.341280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.113 [2024-10-12 22:25:34.350625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.113 [2024-10-12 22:25:34.351086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.113 [2024-10-12 22:25:34.351101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.113 [2024-10-12 22:25:34.351113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.113 [2024-10-12 22:25:34.351282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.113 [2024-10-12 22:25:34.351445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.113 [2024-10-12 22:25:34.351452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.113 [2024-10-12 22:25:34.351460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.113 [2024-10-12 22:25:34.353879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.113 [2024-10-12 22:25:34.363213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.113 [2024-10-12 22:25:34.363715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.114 [2024-10-12 22:25:34.363730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.114 [2024-10-12 22:25:34.363738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.114 [2024-10-12 22:25:34.363905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.114 [2024-10-12 22:25:34.364063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.114 [2024-10-12 22:25:34.364070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.114 [2024-10-12 22:25:34.364078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.114 [2024-10-12 22:25:34.366499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.114 [2024-10-12 22:25:34.375833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.114 [2024-10-12 22:25:34.376305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.114 [2024-10-12 22:25:34.376320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.114 [2024-10-12 22:25:34.376328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.114 [2024-10-12 22:25:34.376496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.114 [2024-10-12 22:25:34.376654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.114 [2024-10-12 22:25:34.376661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.114 [2024-10-12 22:25:34.376669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.114 [2024-10-12 22:25:34.379085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.114 [2024-10-12 22:25:34.388435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.114 [2024-10-12 22:25:34.388933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.114 [2024-10-12 22:25:34.388948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.114 [2024-10-12 22:25:34.388956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.114 [2024-10-12 22:25:34.389129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.114 [2024-10-12 22:25:34.389287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.114 [2024-10-12 22:25:34.389294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.114 [2024-10-12 22:25:34.389302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.114 [2024-10-12 22:25:34.391720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.114 [2024-10-12 22:25:34.401053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.114 [2024-10-12 22:25:34.401658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.114 [2024-10-12 22:25:34.401689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.114 [2024-10-12 22:25:34.401700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.114 [2024-10-12 22:25:34.401885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.114 [2024-10-12 22:25:34.402046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.114 [2024-10-12 22:25:34.402055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.114 [2024-10-12 22:25:34.402063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.114 [2024-10-12 22:25:34.404491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.114 [2024-10-12 22:25:34.413694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.114 [2024-10-12 22:25:34.414085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.114 [2024-10-12 22:25:34.414107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.114 [2024-10-12 22:25:34.414116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.114 [2024-10-12 22:25:34.414284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.114 [2024-10-12 22:25:34.414442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.114 [2024-10-12 22:25:34.414451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.114 [2024-10-12 22:25:34.414458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.114 [2024-10-12 22:25:34.416875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.114 [2024-10-12 22:25:34.426373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.114 [2024-10-12 22:25:34.426877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.114 [2024-10-12 22:25:34.426892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.114 [2024-10-12 22:25:34.426900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.114 [2024-10-12 22:25:34.427068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.114 [2024-10-12 22:25:34.427231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.114 [2024-10-12 22:25:34.427239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.114 [2024-10-12 22:25:34.427247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.114 [2024-10-12 22:25:34.429661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.114 [2024-10-12 22:25:34.438992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.114 [2024-10-12 22:25:34.439508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.114 [2024-10-12 22:25:34.439523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.114 [2024-10-12 22:25:34.439536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.114 [2024-10-12 22:25:34.439702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.114 [2024-10-12 22:25:34.439860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.114 [2024-10-12 22:25:34.439867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.114 [2024-10-12 22:25:34.439875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.114 [2024-10-12 22:25:34.442304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.114 [2024-10-12 22:25:34.451642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.114 [2024-10-12 22:25:34.452138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.114 [2024-10-12 22:25:34.452154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.114 [2024-10-12 22:25:34.452163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.114 [2024-10-12 22:25:34.452330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.114 [2024-10-12 22:25:34.452488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.114 [2024-10-12 22:25:34.452496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.114 [2024-10-12 22:25:34.452503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.114 [2024-10-12 22:25:34.454922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.114 [2024-10-12 22:25:34.464262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.114 [2024-10-12 22:25:34.464765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.114 [2024-10-12 22:25:34.464780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.114 [2024-10-12 22:25:34.464787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.114 [2024-10-12 22:25:34.464954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.114 [2024-10-12 22:25:34.465117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.114 [2024-10-12 22:25:34.465124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.114 [2024-10-12 22:25:34.465132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.114 [2024-10-12 22:25:34.467548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.114 [2024-10-12 22:25:34.476879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.114 [2024-10-12 22:25:34.477390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.114 [2024-10-12 22:25:34.477405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.114 [2024-10-12 22:25:34.477414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.114 [2024-10-12 22:25:34.477581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.114 [2024-10-12 22:25:34.477740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.114 [2024-10-12 22:25:34.477750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.114 [2024-10-12 22:25:34.477759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.114 [2024-10-12 22:25:34.480181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.114 [2024-10-12 22:25:34.489531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.114 [2024-10-12 22:25:34.490041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.114 [2024-10-12 22:25:34.490056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.114 [2024-10-12 22:25:34.490064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.114 [2024-10-12 22:25:34.490241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.114 [2024-10-12 22:25:34.490399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.114 [2024-10-12 22:25:34.490407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.114 [2024-10-12 22:25:34.490415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.115 [2024-10-12 22:25:34.492831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.115 [2024-10-12 22:25:34.502169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.115 [2024-10-12 22:25:34.502668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.115 [2024-10-12 22:25:34.502683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.115 [2024-10-12 22:25:34.502691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.115 [2024-10-12 22:25:34.502860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.115 [2024-10-12 22:25:34.503017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.115 [2024-10-12 22:25:34.503025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.115 [2024-10-12 22:25:34.503033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.115 [2024-10-12 22:25:34.505452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.115 [2024-10-12 22:25:34.514787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.115 [2024-10-12 22:25:34.515258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.115 [2024-10-12 22:25:34.515274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.115 [2024-10-12 22:25:34.515282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.115 [2024-10-12 22:25:34.515450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.115 [2024-10-12 22:25:34.515607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.115 [2024-10-12 22:25:34.515615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.115 [2024-10-12 22:25:34.515622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.115 [2024-10-12 22:25:34.518041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.115 [2024-10-12 22:25:34.527390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.115 [2024-10-12 22:25:34.527894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.115 [2024-10-12 22:25:34.527910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.115 [2024-10-12 22:25:34.527917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.115 [2024-10-12 22:25:34.528085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.115 [2024-10-12 22:25:34.528249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.115 [2024-10-12 22:25:34.528257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.115 [2024-10-12 22:25:34.528265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.115 [2024-10-12 22:25:34.530681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.115 [2024-10-12 22:25:34.540023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.115 [2024-10-12 22:25:34.540525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.115 [2024-10-12 22:25:34.540540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.115 [2024-10-12 22:25:34.540548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.115 [2024-10-12 22:25:34.540717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.115 [2024-10-12 22:25:34.540875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.115 [2024-10-12 22:25:34.540882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.115 [2024-10-12 22:25:34.540890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.115 [2024-10-12 22:25:34.543320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.115 [2024-10-12 22:25:34.552654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.115 [2024-10-12 22:25:34.553252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.115 [2024-10-12 22:25:34.553284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.115 [2024-10-12 22:25:34.553295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.115 [2024-10-12 22:25:34.553483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.115 [2024-10-12 22:25:34.553644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.115 [2024-10-12 22:25:34.553651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.115 [2024-10-12 22:25:34.553659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.115 [2024-10-12 22:25:34.556080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.115 [2024-10-12 22:25:34.565274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.115 [2024-10-12 22:25:34.565859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.115 [2024-10-12 22:25:34.565890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.115 [2024-10-12 22:25:34.565901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.115 [2024-10-12 22:25:34.566090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.115 [2024-10-12 22:25:34.566257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.115 [2024-10-12 22:25:34.566266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.115 [2024-10-12 22:25:34.566274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.115 [2024-10-12 22:25:34.568690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.115 [2024-10-12 22:25:34.577880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.115 [2024-10-12 22:25:34.578441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.115 [2024-10-12 22:25:34.578472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.115 [2024-10-12 22:25:34.578483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.115 [2024-10-12 22:25:34.578668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.115 [2024-10-12 22:25:34.578829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.115 [2024-10-12 22:25:34.578836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.115 [2024-10-12 22:25:34.578844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.115 [2024-10-12 22:25:34.581266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.115 [2024-10-12 22:25:34.590471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.115 [2024-10-12 22:25:34.590980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.115 [2024-10-12 22:25:34.590996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.115 [2024-10-12 22:25:34.591004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.115 [2024-10-12 22:25:34.591177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.115 [2024-10-12 22:25:34.591335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.115 [2024-10-12 22:25:34.591343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.115 [2024-10-12 22:25:34.591351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.115 [2024-10-12 22:25:34.593766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.378 [2024-10-12 22:25:34.603128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.378 [2024-10-12 22:25:34.603511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.378 [2024-10-12 22:25:34.603527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.378 [2024-10-12 22:25:34.603535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.378 [2024-10-12 22:25:34.603704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.378 [2024-10-12 22:25:34.603862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.378 [2024-10-12 22:25:34.603869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.378 [2024-10-12 22:25:34.603881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.378 [2024-10-12 22:25:34.606298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.378 [2024-10-12 22:25:34.615761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.378 [2024-10-12 22:25:34.616377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.378 [2024-10-12 22:25:34.616408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.378 [2024-10-12 22:25:34.616420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.378 [2024-10-12 22:25:34.616607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.378 [2024-10-12 22:25:34.616767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.378 [2024-10-12 22:25:34.616775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.378 [2024-10-12 22:25:34.616783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.378 [2024-10-12 22:25:34.619209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.378 [2024-10-12 22:25:34.628396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.378 [2024-10-12 22:25:34.628903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.378 [2024-10-12 22:25:34.628919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.378 [2024-10-12 22:25:34.628928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.378 [2024-10-12 22:25:34.629097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.378 [2024-10-12 22:25:34.629260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.378 [2024-10-12 22:25:34.629268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.378 [2024-10-12 22:25:34.629275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.378 [2024-10-12 22:25:34.631691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.378 [2024-10-12 22:25:34.641045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.378 [2024-10-12 22:25:34.641424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.378 [2024-10-12 22:25:34.641440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.378 [2024-10-12 22:25:34.641449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.378 [2024-10-12 22:25:34.641618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.378 [2024-10-12 22:25:34.641776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.378 [2024-10-12 22:25:34.641784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.378 [2024-10-12 22:25:34.641791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.378 [2024-10-12 22:25:34.644217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.378 [2024-10-12 22:25:34.653677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.378 [2024-10-12 22:25:34.654143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.378 [2024-10-12 22:25:34.654158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.378 [2024-10-12 22:25:34.654166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.378 [2024-10-12 22:25:34.654333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.378 [2024-10-12 22:25:34.654492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.378 [2024-10-12 22:25:34.654499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.378 [2024-10-12 22:25:34.654506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.378 [2024-10-12 22:25:34.656927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.378 [2024-10-12 22:25:34.666258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.378 [2024-10-12 22:25:34.666748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.378 [2024-10-12 22:25:34.666763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.378 [2024-10-12 22:25:34.666771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.378 [2024-10-12 22:25:34.666938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.378 [2024-10-12 22:25:34.667095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.378 [2024-10-12 22:25:34.667106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.378 [2024-10-12 22:25:34.667114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.378 [2024-10-12 22:25:34.669529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.378 [2024-10-12 22:25:34.678856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.378 [2024-10-12 22:25:34.679444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.378 [2024-10-12 22:25:34.679475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.378 [2024-10-12 22:25:34.679486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.378 [2024-10-12 22:25:34.679673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.378 [2024-10-12 22:25:34.679834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.378 [2024-10-12 22:25:34.679841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.378 [2024-10-12 22:25:34.679849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.378 [2024-10-12 22:25:34.682272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.378 [2024-10-12 22:25:34.691497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.378 [2024-10-12 22:25:34.692125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.378 [2024-10-12 22:25:34.692155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.378 [2024-10-12 22:25:34.692166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.378 [2024-10-12 22:25:34.692354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.378 [2024-10-12 22:25:34.692518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.378 [2024-10-12 22:25:34.692526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.378 [2024-10-12 22:25:34.692534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.378 [2024-10-12 22:25:34.694954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.378 [2024-10-12 22:25:34.704153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.378 [2024-10-12 22:25:34.704757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.378 [2024-10-12 22:25:34.704789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.378 [2024-10-12 22:25:34.704800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.378 [2024-10-12 22:25:34.704985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.378 [2024-10-12 22:25:34.705152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.378 [2024-10-12 22:25:34.705160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.378 [2024-10-12 22:25:34.705168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.378 [2024-10-12 22:25:34.707583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.378 [2024-10-12 22:25:34.716816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.378 [2024-10-12 22:25:34.717383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.378 [2024-10-12 22:25:34.717413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.378 [2024-10-12 22:25:34.717424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.717614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.717775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.717783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.717791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.720214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.729417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.730018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.730048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.379 [2024-10-12 22:25:34.730059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.730251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.730414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.730422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.730431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.732853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.742040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.742675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.742706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.379 [2024-10-12 22:25:34.742718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.742903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.743064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.743072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.743080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.745512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.754700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.755231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.755262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.379 [2024-10-12 22:25:34.755273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.755464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.755625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.755633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.755642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.758066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.767465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.767932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.767948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.379 [2024-10-12 22:25:34.767956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.768129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.768288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.768296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.768304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.770725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.780051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.780651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.780682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.379 [2024-10-12 22:25:34.780699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.780884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.781044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.781052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.781061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.783502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.792719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.793318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.793348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.379 [2024-10-12 22:25:34.793359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.793546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.793707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.793714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.793723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.796144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.805323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.805885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.805916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.379 [2024-10-12 22:25:34.805927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.806120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.806282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.806289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.806297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.808716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.817897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.818406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.818422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.379 [2024-10-12 22:25:34.818431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.818600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.818758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.818769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.818777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.821195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.830508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.831112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.831143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.379 [2024-10-12 22:25:34.831154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.831342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.831503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.831511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.831519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.833940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.843155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.843674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.843689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.379 [2024-10-12 22:25:34.843698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.379 [2024-10-12 22:25:34.843865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.379 [2024-10-12 22:25:34.844024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.379 [2024-10-12 22:25:34.844031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.379 [2024-10-12 22:25:34.844038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.379 [2024-10-12 22:25:34.846465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.379 [2024-10-12 22:25:34.855824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.379 [2024-10-12 22:25:34.856388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.379 [2024-10-12 22:25:34.856419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.380 [2024-10-12 22:25:34.856430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.380 [2024-10-12 22:25:34.856615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.380 [2024-10-12 22:25:34.856775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.380 [2024-10-12 22:25:34.856783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.380 [2024-10-12 22:25:34.856791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.380 [2024-10-12 22:25:34.859222] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.642 [2024-10-12 22:25:34.868420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.642 [2024-10-12 22:25:34.868919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.642 [2024-10-12 22:25:34.868935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.642 [2024-10-12 22:25:34.868943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.642 [2024-10-12 22:25:34.869120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.642 [2024-10-12 22:25:34.869279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.642 [2024-10-12 22:25:34.869287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.642 [2024-10-12 22:25:34.869294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.642 [2024-10-12 22:25:34.871709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.642 [2024-10-12 22:25:34.881030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.642 [2024-10-12 22:25:34.881606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.642 [2024-10-12 22:25:34.881637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.642 [2024-10-12 22:25:34.881648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.642 [2024-10-12 22:25:34.881832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.642 [2024-10-12 22:25:34.881993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.642 [2024-10-12 22:25:34.882000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.642 [2024-10-12 22:25:34.882008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.642 [2024-10-12 22:25:34.884445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.642 [2024-10-12 22:25:34.893629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.642 [2024-10-12 22:25:34.894247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.642 [2024-10-12 22:25:34.894277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.642 [2024-10-12 22:25:34.894288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.642 [2024-10-12 22:25:34.894474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.642 [2024-10-12 22:25:34.894634] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.642 [2024-10-12 22:25:34.894642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.642 [2024-10-12 22:25:34.894650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.642 [2024-10-12 22:25:34.897070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.642 [2024-10-12 22:25:34.906252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.642 [2024-10-12 22:25:34.906860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.642 [2024-10-12 22:25:34.906891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.642 [2024-10-12 22:25:34.906902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.642 [2024-10-12 22:25:34.907095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.642 [2024-10-12 22:25:34.907263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.642 [2024-10-12 22:25:34.907272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.642 [2024-10-12 22:25:34.907280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.642 [2024-10-12 22:25:34.909697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.642 [2024-10-12 22:25:34.918878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.642 [2024-10-12 22:25:34.919441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.642 [2024-10-12 22:25:34.919471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.642 [2024-10-12 22:25:34.919483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.642 [2024-10-12 22:25:34.919668] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.642 [2024-10-12 22:25:34.919828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.642 [2024-10-12 22:25:34.919836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.642 [2024-10-12 22:25:34.919844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.642 [2024-10-12 22:25:34.922270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.642 [2024-10-12 22:25:34.931451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.642 [2024-10-12 22:25:34.932058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.642 [2024-10-12 22:25:34.932089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.642 [2024-10-12 22:25:34.932100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.642 [2024-10-12 22:25:34.932292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.642 [2024-10-12 22:25:34.932453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.642 [2024-10-12 22:25:34.932460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.642 [2024-10-12 22:25:34.932469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.642 [2024-10-12 22:25:34.934886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:34.944078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.643 [2024-10-12 22:25:34.944603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.643 [2024-10-12 22:25:34.944619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.643 [2024-10-12 22:25:34.944628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.643 [2024-10-12 22:25:34.944797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.643 [2024-10-12 22:25:34.944956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.643 [2024-10-12 22:25:34.944963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.643 [2024-10-12 22:25:34.944975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.643 [2024-10-12 22:25:34.947393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:34.956717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.643 [2024-10-12 22:25:34.957322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.643 [2024-10-12 22:25:34.957352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.643 [2024-10-12 22:25:34.957363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.643 [2024-10-12 22:25:34.957548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.643 [2024-10-12 22:25:34.957708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.643 [2024-10-12 22:25:34.957716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.643 [2024-10-12 22:25:34.957724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.643 [2024-10-12 22:25:34.960151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:34.969337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.643 [2024-10-12 22:25:34.969842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.643 [2024-10-12 22:25:34.969858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.643 [2024-10-12 22:25:34.969867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.643 [2024-10-12 22:25:34.970034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.643 [2024-10-12 22:25:34.970198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.643 [2024-10-12 22:25:34.970206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.643 [2024-10-12 22:25:34.970214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.643 [2024-10-12 22:25:34.972631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:34.981975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.643 [2024-10-12 22:25:34.982547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.643 [2024-10-12 22:25:34.982578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.643 [2024-10-12 22:25:34.982590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.643 [2024-10-12 22:25:34.982775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.643 [2024-10-12 22:25:34.982936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.643 [2024-10-12 22:25:34.982944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.643 [2024-10-12 22:25:34.982953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.643 [2024-10-12 22:25:34.985392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:34.994592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.643 [2024-10-12 22:25:34.995167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.643 [2024-10-12 22:25:34.995198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.643 [2024-10-12 22:25:34.995209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.643 [2024-10-12 22:25:34.995398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.643 [2024-10-12 22:25:34.995559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.643 [2024-10-12 22:25:34.995567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.643 [2024-10-12 22:25:34.995575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.643 [2024-10-12 22:25:34.998002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:35.007187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.643 [2024-10-12 22:25:35.007790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.643 [2024-10-12 22:25:35.007820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.643 [2024-10-12 22:25:35.007832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.643 [2024-10-12 22:25:35.008016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.643 [2024-10-12 22:25:35.008184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.643 [2024-10-12 22:25:35.008193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.643 [2024-10-12 22:25:35.008201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.643 [2024-10-12 22:25:35.010620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:35.019804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.643 [2024-10-12 22:25:35.020403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.643 [2024-10-12 22:25:35.020433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.643 [2024-10-12 22:25:35.020444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.643 [2024-10-12 22:25:35.020628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.643 [2024-10-12 22:25:35.020790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.643 [2024-10-12 22:25:35.020797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.643 [2024-10-12 22:25:35.020806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.643 [2024-10-12 22:25:35.023229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:35.032416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.643 [2024-10-12 22:25:35.033001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.643 [2024-10-12 22:25:35.033032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.643 [2024-10-12 22:25:35.033043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.643 [2024-10-12 22:25:35.033237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.643 [2024-10-12 22:25:35.033403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.643 [2024-10-12 22:25:35.033411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.643 [2024-10-12 22:25:35.033419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.643 [2024-10-12 22:25:35.035840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:35.045036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.643 [2024-10-12 22:25:35.045519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.643 [2024-10-12 22:25:35.045536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.643 [2024-10-12 22:25:35.045544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.643 [2024-10-12 22:25:35.045713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.643 [2024-10-12 22:25:35.045871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.643 [2024-10-12 22:25:35.045878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.643 [2024-10-12 22:25:35.045886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.643 [2024-10-12 22:25:35.048303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:35.057644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.643 [2024-10-12 22:25:35.058262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.643 [2024-10-12 22:25:35.058293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.643 [2024-10-12 22:25:35.058304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.643 [2024-10-12 22:25:35.058489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.643 [2024-10-12 22:25:35.058649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.643 [2024-10-12 22:25:35.058657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.643 [2024-10-12 22:25:35.058666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.643 [2024-10-12 22:25:35.061091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.643 [2024-10-12 22:25:35.070273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:16.644 [2024-10-12 22:25:35.070772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.644 [2024-10-12 22:25:35.070788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:16.644 [2024-10-12 22:25:35.070796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:16.644 [2024-10-12 22:25:35.070964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:16.644 [2024-10-12 22:25:35.071127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:16.644 [2024-10-12 22:25:35.071135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:16.644 [2024-10-12 22:25:35.071143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:16.644 [2024-10-12 22:25:35.073561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:16.644 [2024-10-12 22:25:35.083018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.644 [2024-10-12 22:25:35.083581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.644 [2024-10-12 22:25:35.083611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.644 [2024-10-12 22:25:35.083622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.644 [2024-10-12 22:25:35.083806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.644 [2024-10-12 22:25:35.083966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.644 [2024-10-12 22:25:35.083974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.644 [2024-10-12 22:25:35.083982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.644 [2024-10-12 22:25:35.086415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.644 [2024-10-12 22:25:35.095598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.644 [2024-10-12 22:25:35.096205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.644 [2024-10-12 22:25:35.096235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.644 [2024-10-12 22:25:35.096247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.644 [2024-10-12 22:25:35.096434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.644 [2024-10-12 22:25:35.096596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.644 [2024-10-12 22:25:35.096605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.644 [2024-10-12 22:25:35.096613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.644 [2024-10-12 22:25:35.099033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.644 [2024-10-12 22:25:35.108212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.644 [2024-10-12 22:25:35.108798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.644 [2024-10-12 22:25:35.108828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.644 [2024-10-12 22:25:35.108839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.644 [2024-10-12 22:25:35.109024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.644 [2024-10-12 22:25:35.109192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.644 [2024-10-12 22:25:35.109200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.644 [2024-10-12 22:25:35.109208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.644 [2024-10-12 22:25:35.111625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.644 [2024-10-12 22:25:35.120805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.644 [2024-10-12 22:25:35.121289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.644 [2024-10-12 22:25:35.121306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.644 [2024-10-12 22:25:35.121318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.644 [2024-10-12 22:25:35.121485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.644 [2024-10-12 22:25:35.121644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.644 [2024-10-12 22:25:35.121651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.644 [2024-10-12 22:25:35.121659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.644 [2024-10-12 22:25:35.124075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 [2024-10-12 22:25:35.133404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.907 [2024-10-12 22:25:35.133899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.907 [2024-10-12 22:25:35.133913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.907 [2024-10-12 22:25:35.133922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.907 [2024-10-12 22:25:35.134088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.907 [2024-10-12 22:25:35.134256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.907 [2024-10-12 22:25:35.134264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.907 [2024-10-12 22:25:35.134272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.907 [2024-10-12 22:25:35.136684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 [2024-10-12 22:25:35.146005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.907 [2024-10-12 22:25:35.146557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.907 [2024-10-12 22:25:35.146587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.907 [2024-10-12 22:25:35.146598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.907 [2024-10-12 22:25:35.146783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.907 [2024-10-12 22:25:35.146944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.907 [2024-10-12 22:25:35.146952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.907 [2024-10-12 22:25:35.146960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.907 [2024-10-12 22:25:35.149385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 [2024-10-12 22:25:35.158703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.907 [2024-10-12 22:25:35.159271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.907 [2024-10-12 22:25:35.159301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.907 [2024-10-12 22:25:35.159312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.907 [2024-10-12 22:25:35.159497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.907 [2024-10-12 22:25:35.159657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.907 [2024-10-12 22:25:35.159668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.907 [2024-10-12 22:25:35.159677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.907 [2024-10-12 22:25:35.162099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 [2024-10-12 22:25:35.171286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.907 [2024-10-12 22:25:35.171753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.907 [2024-10-12 22:25:35.171769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.907 [2024-10-12 22:25:35.171777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.907 [2024-10-12 22:25:35.171946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.907 [2024-10-12 22:25:35.172113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.907 [2024-10-12 22:25:35.172121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.907 [2024-10-12 22:25:35.172128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.907 [2024-10-12 22:25:35.174542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 5575.80 IOPS, 21.78 MiB/s [2024-10-12T20:25:35.396Z] [2024-10-12 22:25:35.183865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.907 [2024-10-12 22:25:35.184366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.907 [2024-10-12 22:25:35.184384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.907 [2024-10-12 22:25:35.184393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.907 [2024-10-12 22:25:35.184561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.907 [2024-10-12 22:25:35.184725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.907 [2024-10-12 22:25:35.184733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.907 [2024-10-12 22:25:35.184741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.907 [2024-10-12 22:25:35.187162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 [2024-10-12 22:25:35.196475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.907 [2024-10-12 22:25:35.197076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.907 [2024-10-12 22:25:35.197112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.907 [2024-10-12 22:25:35.197123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.907 [2024-10-12 22:25:35.197308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.907 [2024-10-12 22:25:35.197469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.907 [2024-10-12 22:25:35.197477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.907 [2024-10-12 22:25:35.197485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.907 [2024-10-12 22:25:35.199905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 [2024-10-12 22:25:35.209089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.907 [2024-10-12 22:25:35.209672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.907 [2024-10-12 22:25:35.209703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.907 [2024-10-12 22:25:35.209714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.907 [2024-10-12 22:25:35.209898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.907 [2024-10-12 22:25:35.210059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.907 [2024-10-12 22:25:35.210066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.907 [2024-10-12 22:25:35.210075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.907 [2024-10-12 22:25:35.212497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 [2024-10-12 22:25:35.221680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.907 [2024-10-12 22:25:35.222313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.907 [2024-10-12 22:25:35.222344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.907 [2024-10-12 22:25:35.222355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.907 [2024-10-12 22:25:35.222541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.907 [2024-10-12 22:25:35.222702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.907 [2024-10-12 22:25:35.222709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.907 [2024-10-12 22:25:35.222717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.907 [2024-10-12 22:25:35.225142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 [2024-10-12 22:25:35.234335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.907 [2024-10-12 22:25:35.234962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.907 [2024-10-12 22:25:35.234993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.907 [2024-10-12 22:25:35.235004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.907 [2024-10-12 22:25:35.235200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.907 [2024-10-12 22:25:35.235362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.907 [2024-10-12 22:25:35.235370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.907 [2024-10-12 22:25:35.235378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.907 [2024-10-12 22:25:35.237799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 [2024-10-12 22:25:35.247005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.907 [2024-10-12 22:25:35.247556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.907 [2024-10-12 22:25:35.247587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.907 [2024-10-12 22:25:35.247603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.907 [2024-10-12 22:25:35.247788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.907 [2024-10-12 22:25:35.247949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.907 [2024-10-12 22:25:35.247957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.907 [2024-10-12 22:25:35.247966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.907 [2024-10-12 22:25:35.250398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.907 [2024-10-12 22:25:35.259625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.260113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.260130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.260139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.260308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.260465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.260473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.260481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.262990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.908 [2024-10-12 22:25:35.272344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.272852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.272868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.272877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.273044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.273210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.273218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.273226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.275644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.908 [2024-10-12 22:25:35.284990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.285505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.285520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.285529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.285697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.285855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.285862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.285874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.288295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.908 [2024-10-12 22:25:35.297629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.298121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.298136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.298144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.298313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.298471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.298478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.298486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.300903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.908 [2024-10-12 22:25:35.310239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.310736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.310750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.310758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.310925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.311083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.311090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.311097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.313518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.908 [2024-10-12 22:25:35.322849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.323384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.323415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.323426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.323614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.323776] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.323784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.323793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.326219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.908 [2024-10-12 22:25:35.335554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.336062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.336078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.336086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.336259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.336418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.336425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.336433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.338847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.908 [2024-10-12 22:25:35.348188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.348766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.348796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.348807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.348992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.349165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.349174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.349182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.351599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.908 [2024-10-12 22:25:35.360786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.361412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.361443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.361454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.361639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.361801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.361809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.361817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.364239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.908 [2024-10-12 22:25:35.373421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.374025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.374055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.374066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.374262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.374423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.374431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.374440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.376856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:16.908 [2024-10-12 22:25:35.386047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:16.908 [2024-10-12 22:25:35.386664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.908 [2024-10-12 22:25:35.386694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:16.908 [2024-10-12 22:25:35.386705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:16.908 [2024-10-12 22:25:35.386890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:16.908 [2024-10-12 22:25:35.387051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:16.908 [2024-10-12 22:25:35.387059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:16.908 [2024-10-12 22:25:35.387067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:16.908 [2024-10-12 22:25:35.389491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.170 [2024-10-12 22:25:35.398675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.170 [2024-10-12 22:25:35.399205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.170 [2024-10-12 22:25:35.399236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.170 [2024-10-12 22:25:35.399248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.170 [2024-10-12 22:25:35.399437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.170 [2024-10-12 22:25:35.399598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.170 [2024-10-12 22:25:35.399605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.399614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.402038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.411362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.411818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.411849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.411860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.171 [2024-10-12 22:25:35.412045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.171 [2024-10-12 22:25:35.412213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.171 [2024-10-12 22:25:35.412221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.412233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.414648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.423971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.424583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.424614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.424625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.171 [2024-10-12 22:25:35.424811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.171 [2024-10-12 22:25:35.424972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.171 [2024-10-12 22:25:35.424979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.424988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.427410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.436593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.437215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.437245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.437256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.171 [2024-10-12 22:25:35.437445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.171 [2024-10-12 22:25:35.437605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.171 [2024-10-12 22:25:35.437613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.437621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.440040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.449230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.449816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.449846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.449857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.171 [2024-10-12 22:25:35.450041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.171 [2024-10-12 22:25:35.450208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.171 [2024-10-12 22:25:35.450216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.450224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.452642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.461826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.462398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.462433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.462443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.171 [2024-10-12 22:25:35.462628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.171 [2024-10-12 22:25:35.462789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.171 [2024-10-12 22:25:35.462796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.462805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.465224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.474442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.474953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.474969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.474978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.171 [2024-10-12 22:25:35.475152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.171 [2024-10-12 22:25:35.475311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.171 [2024-10-12 22:25:35.475318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.475326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.477739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.487074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.487607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.487638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.487649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.171 [2024-10-12 22:25:35.487835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.171 [2024-10-12 22:25:35.487995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.171 [2024-10-12 22:25:35.488003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.488011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.490438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.499765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.500384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.500415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.500427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.171 [2024-10-12 22:25:35.500614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.171 [2024-10-12 22:25:35.500779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.171 [2024-10-12 22:25:35.500787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.500796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.503218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.512402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.512880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.512896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.512905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.171 [2024-10-12 22:25:35.513074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.171 [2024-10-12 22:25:35.513238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.171 [2024-10-12 22:25:35.513246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.513253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.515664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.524981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.525453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.525468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.525476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.171 [2024-10-12 22:25:35.525645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.171 [2024-10-12 22:25:35.525803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.171 [2024-10-12 22:25:35.525810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.171 [2024-10-12 22:25:35.525818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.171 [2024-10-12 22:25:35.528236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.171 [2024-10-12 22:25:35.537550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.171 [2024-10-12 22:25:35.538015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.171 [2024-10-12 22:25:35.538030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.171 [2024-10-12 22:25:35.538038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.172 [2024-10-12 22:25:35.538210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.172 [2024-10-12 22:25:35.538368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.172 [2024-10-12 22:25:35.538375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.172 [2024-10-12 22:25:35.538383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.172 [2024-10-12 22:25:35.540797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.172 [2024-10-12 22:25:35.550126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.172 [2024-10-12 22:25:35.550736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.172 [2024-10-12 22:25:35.550767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.172 [2024-10-12 22:25:35.550778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.172 [2024-10-12 22:25:35.550963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.172 [2024-10-12 22:25:35.551131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.172 [2024-10-12 22:25:35.551139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.172 [2024-10-12 22:25:35.551147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.172 [2024-10-12 22:25:35.553565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.172 [2024-10-12 22:25:35.562779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.172 [2024-10-12 22:25:35.563399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.172 [2024-10-12 22:25:35.563429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.172 [2024-10-12 22:25:35.563440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.172 [2024-10-12 22:25:35.563626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.172 [2024-10-12 22:25:35.563786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.172 [2024-10-12 22:25:35.563794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.172 [2024-10-12 22:25:35.563802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.172 [2024-10-12 22:25:35.566227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.172 [2024-10-12 22:25:35.575412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.172 [2024-10-12 22:25:35.576033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.172 [2024-10-12 22:25:35.576063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.172 [2024-10-12 22:25:35.576074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.172 [2024-10-12 22:25:35.576269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.172 [2024-10-12 22:25:35.576430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.172 [2024-10-12 22:25:35.576438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.172 [2024-10-12 22:25:35.576446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.172 [2024-10-12 22:25:35.578866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.172 [2024-10-12 22:25:35.588056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.172 [2024-10-12 22:25:35.588604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.172 [2024-10-12 22:25:35.588635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.172 [2024-10-12 22:25:35.588650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.172 [2024-10-12 22:25:35.588834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.172 [2024-10-12 22:25:35.588995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.172 [2024-10-12 22:25:35.589002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.172 [2024-10-12 22:25:35.589010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.172 [2024-10-12 22:25:35.591432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.172 [2024-10-12 22:25:35.600751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.172 [2024-10-12 22:25:35.601407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.172 [2024-10-12 22:25:35.601437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.172 [2024-10-12 22:25:35.601448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.172 [2024-10-12 22:25:35.601635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.172 [2024-10-12 22:25:35.601796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.172 [2024-10-12 22:25:35.601803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.172 [2024-10-12 22:25:35.601812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.172 [2024-10-12 22:25:35.604238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.172 [2024-10-12 22:25:35.613420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.172 [2024-10-12 22:25:35.614008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.172 [2024-10-12 22:25:35.614039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.172 [2024-10-12 22:25:35.614050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.172 [2024-10-12 22:25:35.614242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.172 [2024-10-12 22:25:35.614404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.172 [2024-10-12 22:25:35.614411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.172 [2024-10-12 22:25:35.614419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.172 [2024-10-12 22:25:35.616837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.172 [2024-10-12 22:25:35.626015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.172 [2024-10-12 22:25:35.626580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.172 [2024-10-12 22:25:35.626610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.172 [2024-10-12 22:25:35.626621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.172 [2024-10-12 22:25:35.626805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.172 [2024-10-12 22:25:35.626966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.172 [2024-10-12 22:25:35.626977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.172 [2024-10-12 22:25:35.626985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.172 [2024-10-12 22:25:35.629409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.172 [2024-10-12 22:25:35.638589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.172 [2024-10-12 22:25:35.639052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.172 [2024-10-12 22:25:35.639068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.172 [2024-10-12 22:25:35.639076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.172 [2024-10-12 22:25:35.639252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.172 [2024-10-12 22:25:35.639410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.172 [2024-10-12 22:25:35.639418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.172 [2024-10-12 22:25:35.639426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.172 [2024-10-12 22:25:35.641838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.172 [2024-10-12 22:25:35.651160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.172 [2024-10-12 22:25:35.651584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.172 [2024-10-12 22:25:35.651599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.172 [2024-10-12 22:25:35.651608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.172 [2024-10-12 22:25:35.651775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.172 [2024-10-12 22:25:35.651933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.172 [2024-10-12 22:25:35.651940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.172 [2024-10-12 22:25:35.651948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.172 [2024-10-12 22:25:35.654367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.436 [2024-10-12 22:25:35.663826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:17.436 [2024-10-12 22:25:35.664403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.436 [2024-10-12 22:25:35.664434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:17.436 [2024-10-12 22:25:35.664445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:17.436 [2024-10-12 22:25:35.664630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:17.436 [2024-10-12 22:25:35.664791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:17.436 [2024-10-12 22:25:35.664798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:17.436 [2024-10-12 22:25:35.664807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:17.436 [2024-10-12 22:25:35.667231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:17.436 [2024-10-12 22:25:35.676447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.436 [2024-10-12 22:25:35.677044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.436 [2024-10-12 22:25:35.677074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.436 [2024-10-12 22:25:35.677085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.436 [2024-10-12 22:25:35.677279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.436 [2024-10-12 22:25:35.677440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.436 [2024-10-12 22:25:35.677448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.436 [2024-10-12 22:25:35.677456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.436 [2024-10-12 22:25:35.679873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.436 [2024-10-12 22:25:35.689060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.436 [2024-10-12 22:25:35.689680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.436 [2024-10-12 22:25:35.689711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.436 [2024-10-12 22:25:35.689722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.436 [2024-10-12 22:25:35.689907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.436 [2024-10-12 22:25:35.690068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.436 [2024-10-12 22:25:35.690075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.436 [2024-10-12 22:25:35.690084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.436 [2024-10-12 22:25:35.692509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3770368 Killed "${NVMF_APP[@]}" "$@"
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:17.436 [2024-10-12 22:25:35.701693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.436 [2024-10-12 22:25:35.702245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.436 [2024-10-12 22:25:35.702276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.436 [2024-10-12 22:25:35.702288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.436 [2024-10-12 22:25:35.702479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.436 [2024-10-12 22:25:35.702640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.436 [2024-10-12 22:25:35.702648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.436 [2024-10-12 22:25:35.702656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.436 [2024-10-12 22:25:35.705080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # nvmfpid=3771933
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # waitforlisten 3771933
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3771933 ']'
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:37:17.436 22:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:17.436 [2024-10-12 22:25:35.714278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.436 [2024-10-12 22:25:35.714815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.436 [2024-10-12 22:25:35.714833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.436 [2024-10-12 22:25:35.714842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.436 [2024-10-12 22:25:35.715012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.436 [2024-10-12 22:25:35.715177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.436 [2024-10-12 22:25:35.715185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.436 [2024-10-12 22:25:35.715193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.436 [2024-10-12 22:25:35.717609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.436 [2024-10-12 22:25:35.726935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.436 [2024-10-12 22:25:35.727503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.727534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.727545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.437 [2024-10-12 22:25:35.727733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.437 [2024-10-12 22:25:35.727898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.437 [2024-10-12 22:25:35.727906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.437 [2024-10-12 22:25:35.727914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.437 [2024-10-12 22:25:35.730343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.437 [2024-10-12 22:25:35.739533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.437 [2024-10-12 22:25:35.740035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.740052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.740060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.437 [2024-10-12 22:25:35.740237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.437 [2024-10-12 22:25:35.740396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.437 [2024-10-12 22:25:35.740404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.437 [2024-10-12 22:25:35.740412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.437 [2024-10-12 22:25:35.742827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.437 [2024-10-12 22:25:35.752154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.437 [2024-10-12 22:25:35.752624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.752639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.752647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.437 [2024-10-12 22:25:35.752817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.437 [2024-10-12 22:25:35.752976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.437 [2024-10-12 22:25:35.752983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.437 [2024-10-12 22:25:35.752991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.437 [2024-10-12 22:25:35.755408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.437 [2024-10-12 22:25:35.759487] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:37:17.437 [2024-10-12 22:25:35.759532] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:17.437 [2024-10-12 22:25:35.764808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.437 [2024-10-12 22:25:35.765451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.765481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.765493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.437 [2024-10-12 22:25:35.765678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.437 [2024-10-12 22:25:35.765838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.437 [2024-10-12 22:25:35.765845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.437 [2024-10-12 22:25:35.765854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.437 [2024-10-12 22:25:35.768279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.437 [2024-10-12 22:25:35.777473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.437 [2024-10-12 22:25:35.777994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.778010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.778019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.437 [2024-10-12 22:25:35.778196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.437 [2024-10-12 22:25:35.778355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.437 [2024-10-12 22:25:35.778362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.437 [2024-10-12 22:25:35.778371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.437 [2024-10-12 22:25:35.780786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.437 [2024-10-12 22:25:35.790123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.437 [2024-10-12 22:25:35.790609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.790640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.790651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.437 [2024-10-12 22:25:35.790836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.437 [2024-10-12 22:25:35.790997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.437 [2024-10-12 22:25:35.791005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.437 [2024-10-12 22:25:35.791013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.437 [2024-10-12 22:25:35.793436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.437 [2024-10-12 22:25:35.802711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.437 [2024-10-12 22:25:35.803234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.803265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.803276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.437 [2024-10-12 22:25:35.803468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.437 [2024-10-12 22:25:35.803629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.437 [2024-10-12 22:25:35.803637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.437 [2024-10-12 22:25:35.803646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.437 [2024-10-12 22:25:35.806070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.437 [2024-10-12 22:25:35.815306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.437 [2024-10-12 22:25:35.815928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.815959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.815970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.437 [2024-10-12 22:25:35.816161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.437 [2024-10-12 22:25:35.816323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.437 [2024-10-12 22:25:35.816330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.437 [2024-10-12 22:25:35.816339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.437 [2024-10-12 22:25:35.818764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.437 [2024-10-12 22:25:35.827952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.437 [2024-10-12 22:25:35.828552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.828583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.828594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.437 [2024-10-12 22:25:35.828783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.437 [2024-10-12 22:25:35.828943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.437 [2024-10-12 22:25:35.828951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.437 [2024-10-12 22:25:35.828959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.437 [2024-10-12 22:25:35.831381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.437 [2024-10-12 22:25:35.840574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.437 [2024-10-12 22:25:35.841209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.841240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.841251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.437 [2024-10-12 22:25:35.841340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:37:17.437 [2024-10-12 22:25:35.841446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.437 [2024-10-12 22:25:35.841607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.437 [2024-10-12 22:25:35.841614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.437 [2024-10-12 22:25:35.841623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.437 [2024-10-12 22:25:35.844049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.437 [2024-10-12 22:25:35.853265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.437 [2024-10-12 22:25:35.853856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.437 [2024-10-12 22:25:35.853890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.437 [2024-10-12 22:25:35.853902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.438 [2024-10-12 22:25:35.854089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.438 [2024-10-12 22:25:35.854258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.438 [2024-10-12 22:25:35.854268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.438 [2024-10-12 22:25:35.854280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.438 [2024-10-12 22:25:35.856703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.438 [2024-10-12 22:25:35.865906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.438 [2024-10-12 22:25:35.866528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.438 [2024-10-12 22:25:35.866567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.438 [2024-10-12 22:25:35.866580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.438 [2024-10-12 22:25:35.866770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.438 [2024-10-12 22:25:35.866931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.438 [2024-10-12 22:25:35.866939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.438 [2024-10-12 22:25:35.866949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.438 [2024-10-12 22:25:35.869378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.438 [2024-10-12 22:25:35.869597] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:17.438 [2024-10-12 22:25:35.869620] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:17.438 [2024-10-12 22:25:35.869626] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:17.438 [2024-10-12 22:25:35.869632] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:17.438 [2024-10-12 22:25:35.869636] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:17.438 [2024-10-12 22:25:35.869765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:37:17.438 [2024-10-12 22:25:35.869921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:37:17.438 [2024-10-12 22:25:35.869923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:37:17.438 [2024-10-12 22:25:35.878494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.438 [2024-10-12 22:25:35.879047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.438 [2024-10-12 22:25:35.879065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.438 [2024-10-12 22:25:35.879075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.438 [2024-10-12 22:25:35.879249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.438 [2024-10-12 22:25:35.879409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.438 [2024-10-12 22:25:35.879417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.438 [2024-10-12 22:25:35.879427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.438 [2024-10-12 22:25:35.881844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.438 [2024-10-12 22:25:35.891128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.438 [2024-10-12 22:25:35.891666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.438 [2024-10-12 22:25:35.891700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.438 [2024-10-12 22:25:35.891712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.438 [2024-10-12 22:25:35.891902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.438 [2024-10-12 22:25:35.892067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.438 [2024-10-12 22:25:35.892075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.438 [2024-10-12 22:25:35.892091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.438 [2024-10-12 22:25:35.894527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.438 [2024-10-12 22:25:35.903716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.438 [2024-10-12 22:25:35.904216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.438 [2024-10-12 22:25:35.904233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.438 [2024-10-12 22:25:35.904243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.438 [2024-10-12 22:25:35.904411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.438 [2024-10-12 22:25:35.904569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.438 [2024-10-12 22:25:35.904576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.438 [2024-10-12 22:25:35.904585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.438 [2024-10-12 22:25:35.907001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.438 [2024-10-12 22:25:35.916339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.438 [2024-10-12 22:25:35.916811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.438 [2024-10-12 22:25:35.916826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.438 [2024-10-12 22:25:35.916835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.438 [2024-10-12 22:25:35.917002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.438 [2024-10-12 22:25:35.917165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.438 [2024-10-12 22:25:35.917172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.438 [2024-10-12 22:25:35.917181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.438 [2024-10-12 22:25:35.919593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.701 [2024-10-12 22:25:35.928921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.701 [2024-10-12 22:25:35.929398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.701 [2024-10-12 22:25:35.929413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.701 [2024-10-12 22:25:35.929421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.701 [2024-10-12 22:25:35.929588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.701 [2024-10-12 22:25:35.929748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.701 [2024-10-12 22:25:35.929755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.701 [2024-10-12 22:25:35.929763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.701 [2024-10-12 22:25:35.932181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.701 [2024-10-12 22:25:35.941509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.701 [2024-10-12 22:25:35.941960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.701 [2024-10-12 22:25:35.941999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.701 [2024-10-12 22:25:35.942010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.701 [2024-10-12 22:25:35.942203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.701 [2024-10-12 22:25:35.942365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.701 [2024-10-12 22:25:35.942373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.701 [2024-10-12 22:25:35.942381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.701 [2024-10-12 22:25:35.944799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.701 [2024-10-12 22:25:35.954151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.701 [2024-10-12 22:25:35.954739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.701 [2024-10-12 22:25:35.954771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.701 [2024-10-12 22:25:35.954783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.701 [2024-10-12 22:25:35.954968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.701 [2024-10-12 22:25:35.955135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.701 [2024-10-12 22:25:35.955144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.701 [2024-10-12 22:25:35.955152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.701 [2024-10-12 22:25:35.957572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.701 [2024-10-12 22:25:35.966762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.701 [2024-10-12 22:25:35.967412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.701 [2024-10-12 22:25:35.967442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.701 [2024-10-12 22:25:35.967454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.701 [2024-10-12 22:25:35.967640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.701 [2024-10-12 22:25:35.967802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.701 [2024-10-12 22:25:35.967810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.701 [2024-10-12 22:25:35.967818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.701 [2024-10-12 22:25:35.970244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.701 [2024-10-12 22:25:35.979433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.701 [2024-10-12 22:25:35.979957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.701 [2024-10-12 22:25:35.979973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.701 [2024-10-12 22:25:35.979981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.701 [2024-10-12 22:25:35.980154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.701 [2024-10-12 22:25:35.980317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.701 [2024-10-12 22:25:35.980324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.701 [2024-10-12 22:25:35.980332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.701 [2024-10-12 22:25:35.982747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.701 [2024-10-12 22:25:35.992086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.701 [2024-10-12 22:25:35.992649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.701 [2024-10-12 22:25:35.992681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.701 [2024-10-12 22:25:35.992692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.701 [2024-10-12 22:25:35.992882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.701 [2024-10-12 22:25:35.993043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.701 [2024-10-12 22:25:35.993052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.701 [2024-10-12 22:25:35.993060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.701 [2024-10-12 22:25:35.995486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.701 [2024-10-12 22:25:36.004765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.701 [2024-10-12 22:25:36.005396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.701 [2024-10-12 22:25:36.005427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.701 [2024-10-12 22:25:36.005438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.701 [2024-10-12 22:25:36.005625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.701 [2024-10-12 22:25:36.005787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.005796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.005804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.008229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.017421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.018045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.018076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.018088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.018286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.018448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.018456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.018465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.020887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.030075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.030560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.030576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.030585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.030752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.030912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.030919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.030927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.033348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.042677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.043228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.043260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.043271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.043458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.043619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.043626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.043634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.046056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.055264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.055886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.055917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.055928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.056118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.056279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.056287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.056296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.058713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.067904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.068398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.068415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.068428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.068597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.068755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.068763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.068770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.071191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.080520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.081082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.081119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.081131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.081322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.081483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.081490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.081499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.084095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.093204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.093694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.093710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.093718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.093886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.094044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.094052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.094060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.096484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.105810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.106338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.106369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.106380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.106569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.106730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.106741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.106749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.109172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.118503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.119130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.119161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.119172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.119361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.119522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.119530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.119538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.121962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.131156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.131736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.131767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.131778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.131963] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.132129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.702 [2024-10-12 22:25:36.132137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.702 [2024-10-12 22:25:36.132146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.702 [2024-10-12 22:25:36.134564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.702 [2024-10-12 22:25:36.143753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.702 [2024-10-12 22:25:36.144117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.702 [2024-10-12 22:25:36.144134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.702 [2024-10-12 22:25:36.144142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.702 [2024-10-12 22:25:36.144312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.702 [2024-10-12 22:25:36.144470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.703 [2024-10-12 22:25:36.144477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.703 [2024-10-12 22:25:36.144485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.703 [2024-10-12 22:25:36.146914] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.703 [2024-10-12 22:25:36.156394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.703 [2024-10-12 22:25:36.157012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.703 [2024-10-12 22:25:36.157043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.703 [2024-10-12 22:25:36.157055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.703 [2024-10-12 22:25:36.157246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.703 [2024-10-12 22:25:36.157407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.703 [2024-10-12 22:25:36.157415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.703 [2024-10-12 22:25:36.157423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.703 [2024-10-12 22:25:36.159841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.703 [2024-10-12 22:25:36.169033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.703 [2024-10-12 22:25:36.169536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.703 [2024-10-12 22:25:36.169553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.703 [2024-10-12 22:25:36.169562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.703 [2024-10-12 22:25:36.169730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.703 [2024-10-12 22:25:36.169889] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.703 [2024-10-12 22:25:36.169896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.703 [2024-10-12 22:25:36.169903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.703 [2024-10-12 22:25:36.172325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.703 4646.50 IOPS, 18.15 MiB/s [2024-10-12T20:25:36.192Z] [2024-10-12 22:25:36.182791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.703 [2024-10-12 22:25:36.183425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.703 [2024-10-12 22:25:36.183456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.703 [2024-10-12 22:25:36.183467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.703 [2024-10-12 22:25:36.183652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.703 [2024-10-12 22:25:36.183813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.703 [2024-10-12 22:25:36.183821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.703 [2024-10-12 22:25:36.183829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.703 [2024-10-12 22:25:36.186256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.966 [2024-10-12 22:25:36.195458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.966 [2024-10-12 22:25:36.195983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.966 [2024-10-12 22:25:36.195999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.966 [2024-10-12 22:25:36.196007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.966 [2024-10-12 22:25:36.196186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.966 [2024-10-12 22:25:36.196345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.966 [2024-10-12 22:25:36.196353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.966 [2024-10-12 22:25:36.196361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.966 [2024-10-12 22:25:36.198775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.966 [2024-10-12 22:25:36.208107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.966 [2024-10-12 22:25:36.208561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.966 [2024-10-12 22:25:36.208592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.966 [2024-10-12 22:25:36.208603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.966 [2024-10-12 22:25:36.208790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.966 [2024-10-12 22:25:36.208950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.966 [2024-10-12 22:25:36.208958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.966 [2024-10-12 22:25:36.208966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.966 [2024-10-12 22:25:36.211390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.966 [2024-10-12 22:25:36.220719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.966 [2024-10-12 22:25:36.221429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.966 [2024-10-12 22:25:36.221459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.966 [2024-10-12 22:25:36.221471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.966 [2024-10-12 22:25:36.221660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.966 [2024-10-12 22:25:36.221820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.966 [2024-10-12 22:25:36.221828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.966 [2024-10-12 22:25:36.221837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.966 [2024-10-12 22:25:36.224262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.966 [2024-10-12 22:25:36.233312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.966 [2024-10-12 22:25:36.233831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.966 [2024-10-12 22:25:36.233847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.966 [2024-10-12 22:25:36.233856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.966 [2024-10-12 22:25:36.234025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.966 [2024-10-12 22:25:36.234189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.966 [2024-10-12 22:25:36.234197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.966 [2024-10-12 22:25:36.234209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.966 [2024-10-12 22:25:36.236623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.966 [2024-10-12 22:25:36.245946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.966 [2024-10-12 22:25:36.246304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.966 [2024-10-12 22:25:36.246320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.966 [2024-10-12 22:25:36.246328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.966 [2024-10-12 22:25:36.246498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.966 [2024-10-12 22:25:36.246656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.966 [2024-10-12 22:25:36.246664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.966 [2024-10-12 22:25:36.246671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.966 [2024-10-12 22:25:36.249087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.966 [2024-10-12 22:25:36.258561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.966 [2024-10-12 22:25:36.259054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.966 [2024-10-12 22:25:36.259069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.966 [2024-10-12 22:25:36.259077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.966 [2024-10-12 22:25:36.259256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.966 [2024-10-12 22:25:36.259415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.966 [2024-10-12 22:25:36.259423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.966 [2024-10-12 22:25:36.259431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.966 [2024-10-12 22:25:36.261847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.966 [2024-10-12 22:25:36.271176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.966 [2024-10-12 22:25:36.271549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.966 [2024-10-12 22:25:36.271564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.966 [2024-10-12 22:25:36.271572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.966 [2024-10-12 22:25:36.271739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.966 [2024-10-12 22:25:36.271896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.966 [2024-10-12 22:25:36.271904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.966 [2024-10-12 22:25:36.271912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.966 [2024-10-12 22:25:36.274324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.966 [2024-10-12 22:25:36.283788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.966 [2024-10-12 22:25:36.284220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.284235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.284243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.284412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.284571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.284578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.284586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.287000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.967 [2024-10-12 22:25:36.296478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.967 [2024-10-12 22:25:36.296942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.296956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.296964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.297136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.297297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.297304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.297312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.299749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.967 [2024-10-12 22:25:36.309080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.967 [2024-10-12 22:25:36.309561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.309576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.309584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.309751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.309909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.309916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.309924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.312342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.967 [2024-10-12 22:25:36.321666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.967 [2024-10-12 22:25:36.322196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.322211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.322219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.322392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.322557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.322565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.322573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.324990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.967 [2024-10-12 22:25:36.334324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.967 [2024-10-12 22:25:36.334938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.334970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.334981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.335172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.335334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.335342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.335350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.337769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.967 [2024-10-12 22:25:36.346962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.967 [2024-10-12 22:25:36.347420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.347449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.347461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.347645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.347807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.347816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.347824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.350252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.967 [2024-10-12 22:25:36.359584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.967 [2024-10-12 22:25:36.360110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.360126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.360135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.360305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.360465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.360473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.360482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.362903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.967 [2024-10-12 22:25:36.372233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.967 [2024-10-12 22:25:36.372847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.372878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.372889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.373075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.373250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.373259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.373268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.375686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.967 [2024-10-12 22:25:36.384876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.967 [2024-10-12 22:25:36.385479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.385510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.385520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.385706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.385866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.385873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.385882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.388305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.967 [2024-10-12 22:25:36.397504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.967 [2024-10-12 22:25:36.397979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.397996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.398004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.398179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.398342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.398350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.398358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.400770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.967 [2024-10-12 22:25:36.410098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.967 [2024-10-12 22:25:36.410566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.967 [2024-10-12 22:25:36.410581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.967 [2024-10-12 22:25:36.410594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.967 [2024-10-12 22:25:36.410761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.967 [2024-10-12 22:25:36.410919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.967 [2024-10-12 22:25:36.410927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.967 [2024-10-12 22:25:36.410935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.967 [2024-10-12 22:25:36.413353] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.968 [2024-10-12 22:25:36.422678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.968 [2024-10-12 22:25:36.423209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.968 [2024-10-12 22:25:36.423240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.968 [2024-10-12 22:25:36.423252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.968 [2024-10-12 22:25:36.423437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.968 [2024-10-12 22:25:36.423598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.968 [2024-10-12 22:25:36.423606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.968 [2024-10-12 22:25:36.423614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.968 [2024-10-12 22:25:36.426037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.968 [2024-10-12 22:25:36.435376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.968 [2024-10-12 22:25:36.435896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.968 [2024-10-12 22:25:36.435913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.968 [2024-10-12 22:25:36.435921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.968 [2024-10-12 22:25:36.436090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.968 [2024-10-12 22:25:36.436255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.968 [2024-10-12 22:25:36.436263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.968 [2024-10-12 22:25:36.436271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.968 [2024-10-12 22:25:36.438685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:17.968 [2024-10-12 22:25:36.448019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:17.968 [2024-10-12 22:25:36.448525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.968 [2024-10-12 22:25:36.448540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:17.968 [2024-10-12 22:25:36.448549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:17.968 [2024-10-12 22:25:36.448716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:17.968 [2024-10-12 22:25:36.448874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:17.968 [2024-10-12 22:25:36.448885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:17.968 [2024-10-12 22:25:36.448893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:17.968 [2024-10-12 22:25:36.451309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.230 [2024-10-12 22:25:36.460634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:18.230 [2024-10-12 22:25:36.461339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.230 [2024-10-12 22:25:36.461370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:18.230 [2024-10-12 22:25:36.461381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:18.230 [2024-10-12 22:25:36.461566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:18.230 [2024-10-12 22:25:36.461727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:18.230 [2024-10-12 22:25:36.461735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:18.230 [2024-10-12 22:25:36.461743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:18.230 [2024-10-12 22:25:36.464168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.230 [2024-10-12 22:25:36.473221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:18.230 [2024-10-12 22:25:36.473694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.230 [2024-10-12 22:25:36.473710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:18.230 [2024-10-12 22:25:36.473718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:18.230 [2024-10-12 22:25:36.473888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:18.230 [2024-10-12 22:25:36.474048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:18.230 [2024-10-12 22:25:36.474055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:18.230 [2024-10-12 22:25:36.474063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:18.230 [2024-10-12 22:25:36.476485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.230 [2024-10-12 22:25:36.485813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:18.230 [2024-10-12 22:25:36.486318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.230 [2024-10-12 22:25:36.486350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:18.230 [2024-10-12 22:25:36.486361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:18.230 [2024-10-12 22:25:36.486548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:18.230 [2024-10-12 22:25:36.486709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:18.230 [2024-10-12 22:25:36.486717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:18.230 [2024-10-12 22:25:36.486725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:18.230 [2024-10-12 22:25:36.489160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.230 [2024-10-12 22:25:36.498501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:18.230 [2024-10-12 22:25:36.498878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.230 [2024-10-12 22:25:36.498894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:18.230 [2024-10-12 22:25:36.498903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:18.230 [2024-10-12 22:25:36.499073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:18.230 [2024-10-12 22:25:36.499240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:18.230 [2024-10-12 22:25:36.499248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:18.230 [2024-10-12 22:25:36.499257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:18.230 [2024-10-12 22:25:36.501677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.230 [2024-10-12 22:25:36.511257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:18.230 [2024-10-12 22:25:36.511750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.230 [2024-10-12 22:25:36.511780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:18.230 [2024-10-12 22:25:36.511788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:18.230 [2024-10-12 22:25:36.511953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:18.230 [2024-10-12 22:25:36.512112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:18.230 [2024-10-12 22:25:36.512119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:18.230 [2024-10-12 22:25:36.512125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:18.230 [2024-10-12 22:25:36.514523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.230 [2024-10-12 22:25:36.523973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:18.230 [2024-10-12 22:25:36.524546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.231 [2024-10-12 22:25:36.524576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:18.231 [2024-10-12 22:25:36.524585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:18.231 [2024-10-12 22:25:36.524750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:18.231 [2024-10-12 22:25:36.524901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:18.231 [2024-10-12 22:25:36.524907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:18.231 [2024-10-12 22:25:36.524912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:18.231 [2024-10-12 22:25:36.527316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.231 [2024-10-12 22:25:36.536628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:18.231 [2024-10-12 22:25:36.537071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.231 [2024-10-12 22:25:36.537107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:18.231 [2024-10-12 22:25:36.537117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:18.231 [2024-10-12 22:25:36.537287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:18.231 [2024-10-12 22:25:36.537438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:18.231 [2024-10-12 22:25:36.537444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:18.231 [2024-10-12 22:25:36.537449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:18.231 [2024-10-12 22:25:36.539849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.231 [2024-10-12 22:25:36.549311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:18.231 [2024-10-12 22:25:36.549869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.231 [2024-10-12 22:25:36.549899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:18.231 [2024-10-12 22:25:36.549907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:18.231 [2024-10-12 22:25:36.550072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:18.231 [2024-10-12 22:25:36.550227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:18.231 [2024-10-12 22:25:36.550233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:18.231 [2024-10-12 22:25:36.550239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:18.231 [2024-10-12 22:25:36.552637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-10-12 22:25:36.561950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:18.231 [2024-10-12 22:25:36.562552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.231 [2024-10-12 22:25:36.562583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420
00:37:18.231 [2024-10-12 22:25:36.562591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set
00:37:18.231 [2024-10-12 22:25:36.562756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor
00:37:18.231 [2024-10-12 22:25:36.562907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:37:18.231 [2024-10-12 22:25:36.562914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:37:18.231 [2024-10-12 22:25:36.562921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:37:18.231 [2024-10-12 22:25:36.565324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:37:18.231 [2024-10-12 22:25:36.574635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.231 [2024-10-12 22:25:36.575101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.231 [2024-10-12 22:25:36.575120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:18.231 [2024-10-12 22:25:36.575126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:18.231 [2024-10-12 22:25:36.575279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:18.231 [2024-10-12 22:25:36.575427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.231 [2024-10-12 22:25:36.575433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.231 [2024-10-12 22:25:36.575438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.231 [2024-10-12 22:25:36.577830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.231 [2024-10-12 22:25:36.587280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.231 [2024-10-12 22:25:36.587739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.231 [2024-10-12 22:25:36.587751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:18.231 [2024-10-12 22:25:36.587757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:18.231 [2024-10-12 22:25:36.587905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:18.231 [2024-10-12 22:25:36.588053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.231 [2024-10-12 22:25:36.588059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.231 [2024-10-12 22:25:36.588064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.231 [2024-10-12 22:25:36.590469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.231 [2024-10-12 22:25:36.599909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.231 [2024-10-12 22:25:36.600461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.231 [2024-10-12 22:25:36.600492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:18.231 [2024-10-12 22:25:36.600501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:18.231 [2024-10-12 22:25:36.600665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:18.231 [2024-10-12 22:25:36.600817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.231 [2024-10-12 22:25:36.600823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.231 [2024-10-12 22:25:36.600829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.231 [2024-10-12 22:25:36.603232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.231 [2024-10-12 22:25:36.603915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:18.231 [2024-10-12 22:25:36.612532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.231 [2024-10-12 22:25:36.613033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.231 [2024-10-12 22:25:36.613064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:18.231 [2024-10-12 22:25:36.613076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:18.231 [2024-10-12 22:25:36.613247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:18.231 [2024-10-12 22:25:36.613399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.231 [2024-10-12 22:25:36.613405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.231 [2024-10-12 22:25:36.613410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.231 [2024-10-12 22:25:36.615807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.231 [2024-10-12 22:25:36.625119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.231 [2024-10-12 22:25:36.625610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.231 [2024-10-12 22:25:36.625640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:18.231 [2024-10-12 22:25:36.625649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:18.231 [2024-10-12 22:25:36.625814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:18.231 [2024-10-12 22:25:36.625966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.231 [2024-10-12 22:25:36.625972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.231 [2024-10-12 22:25:36.625978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.231 [2024-10-12 22:25:36.628386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.231 Malloc0 00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.231 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.231 [2024-10-12 22:25:36.637692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.231 [2024-10-12 22:25:36.638206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.231 [2024-10-12 22:25:36.638237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:18.232 [2024-10-12 22:25:36.638246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:18.232 [2024-10-12 22:25:36.638411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:18.232 [2024-10-12 22:25:36.638563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.232 [2024-10-12 22:25:36.638569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.232 [2024-10-12 22:25:36.638574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.232 [2024-10-12 22:25:36.640977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.232 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.232 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:18.232 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.232 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.232 [2024-10-12 22:25:36.650311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.232 [2024-10-12 22:25:36.650885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.232 [2024-10-12 22:25:36.650915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:18.232 [2024-10-12 22:25:36.650924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:18.232 [2024-10-12 22:25:36.651089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:18.232 [2024-10-12 22:25:36.651247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.232 [2024-10-12 22:25:36.651254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.232 [2024-10-12 22:25:36.651259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.232 [2024-10-12 22:25:36.653655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.232 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.232 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:18.232 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.232 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:18.232 [2024-10-12 22:25:36.662958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.232 [2024-10-12 22:25:36.663578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.232 [2024-10-12 22:25:36.663609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d0ed0 with addr=10.0.0.2, port=4420 00:37:18.232 [2024-10-12 22:25:36.663618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d0ed0 is same with the state(6) to be set 00:37:18.232 [2024-10-12 22:25:36.663783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d0ed0 (9): Bad file descriptor 00:37:18.232 [2024-10-12 22:25:36.663934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:18.232 [2024-10-12 22:25:36.663941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:18.232 [2024-10-12 22:25:36.663946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:18.232 [2024-10-12 22:25:36.665950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.232 [2024-10-12 22:25:36.666349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:18.232 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.232 22:25:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3770913 00:37:18.232 [2024-10-12 22:25:36.675523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:18.232 [2024-10-12 22:25:36.709223] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:19.743 4858.29 IOPS, 18.98 MiB/s [2024-10-12T20:25:39.615Z] 5890.62 IOPS, 23.01 MiB/s [2024-10-12T20:25:40.558Z] 6664.67 IOPS, 26.03 MiB/s [2024-10-12T20:25:41.500Z] 7298.10 IOPS, 28.51 MiB/s [2024-10-12T20:25:42.442Z] 7816.82 IOPS, 30.53 MiB/s [2024-10-12T20:25:43.384Z] 8255.83 IOPS, 32.25 MiB/s [2024-10-12T20:25:44.326Z] 8621.46 IOPS, 33.68 MiB/s [2024-10-12T20:25:45.284Z] 8923.93 IOPS, 34.86 MiB/s [2024-10-12T20:25:45.284Z] 9199.53 IOPS, 35.94 MiB/s 00:37:26.795 Latency(us) 00:37:26.795 [2024-10-12T20:25:45.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.795 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:26.795 Verification LBA range: start 0x0 length 0x4000 00:37:26.795 Nvme1n1 : 15.01 9204.67 35.96 13553.46 0.00 5605.90 366.93 16165.55 00:37:26.795 [2024-10-12T20:25:45.284Z] =================================================================================================================== 00:37:26.795 [2024-10-12T20:25:45.284Z] Total : 9204.67 35.96 13553.46 0.00 5605.90 366.93 16165.55 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 
00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:27.087 rmmod nvme_tcp 00:37:27.087 rmmod nvme_fabrics 00:37:27.087 rmmod nvme_keyring 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@513 -- # '[' -n 3771933 ']' 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # killprocess 3771933 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3771933 ']' 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3771933 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- 
# ps --no-headers -o comm= 3771933 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3771933' 00:37:27.087 killing process with pid 3771933 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3771933 00:37:27.087 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3771933 00:37:27.355 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:27.355 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:27.355 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:27.355 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:27.355 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-save 00:37:27.355 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:27.355 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@787 -- # iptables-restore 00:37:27.356 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:27.356 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:27.356 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.356 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:27.356 22:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.364 22:25:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:29.364 00:37:29.364 real 0m28.069s 00:37:29.364 user 1m3.264s 00:37:29.364 sys 0m7.558s 00:37:29.364 22:25:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:29.364 22:25:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:29.364 ************************************ 00:37:29.364 END TEST nvmf_bdevperf 00:37:29.364 ************************************ 00:37:29.364 22:25:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:29.364 22:25:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:29.364 22:25:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:29.364 22:25:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:29.364 ************************************ 00:37:29.364 START TEST nvmf_target_disconnect 00:37:29.364 ************************************ 00:37:29.364 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:29.364 * Looking for test storage... 
00:37:29.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:29.625 22:25:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:29.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.625 
--rc genhtml_branch_coverage=1 00:37:29.625 --rc genhtml_function_coverage=1 00:37:29.625 --rc genhtml_legend=1 00:37:29.625 --rc geninfo_all_blocks=1 00:37:29.625 --rc geninfo_unexecuted_blocks=1 00:37:29.625 00:37:29.625 ' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:29.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.625 --rc genhtml_branch_coverage=1 00:37:29.625 --rc genhtml_function_coverage=1 00:37:29.625 --rc genhtml_legend=1 00:37:29.625 --rc geninfo_all_blocks=1 00:37:29.625 --rc geninfo_unexecuted_blocks=1 00:37:29.625 00:37:29.625 ' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:29.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.625 --rc genhtml_branch_coverage=1 00:37:29.625 --rc genhtml_function_coverage=1 00:37:29.625 --rc genhtml_legend=1 00:37:29.625 --rc geninfo_all_blocks=1 00:37:29.625 --rc geninfo_unexecuted_blocks=1 00:37:29.625 00:37:29.625 ' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:29.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.625 --rc genhtml_branch_coverage=1 00:37:29.625 --rc genhtml_function_coverage=1 00:37:29.625 --rc genhtml_legend=1 00:37:29.625 --rc geninfo_all_blocks=1 00:37:29.625 --rc geninfo_unexecuted_blocks=1 00:37:29.625 00:37:29.625 ' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.625 22:25:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:29.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:29.625 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:29.626 22:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:37.771 
22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ e810 == 
mlx5 ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:37.771 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:37.771 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.771 
22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:37.771 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:37.771 22:25:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:37.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:37.771 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:37.772 
22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:37.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:37.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:37:37.772 00:37:37.772 --- 10.0.0.2 ping statistics --- 00:37:37.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.772 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:37.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:37.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:37:37.772 00:37:37.772 --- 10.0.0.1 ping statistics --- 00:37:37.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:37.772 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # return 0 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:37.772 22:25:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:37.772 ************************************ 00:37:37.772 START TEST nvmf_target_disconnect_tc1 00:37:37.772 ************************************ 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 
00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:37.772 [2024-10-12 22:25:55.636944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:37.772 [2024-10-12 22:25:55.637032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6d0e0 with addr=10.0.0.2, port=4420 00:37:37.772 [2024-10-12 22:25:55.637074] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: 
failed to create admin qpair 00:37:37.772 [2024-10-12 22:25:55.637086] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:37.772 [2024-10-12 22:25:55.637095] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:37.772 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:37.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:37.772 Initializing NVMe Controllers 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:37.772 00:37:37.772 real 0m0.133s 00:37:37.772 user 0m0.053s 00:37:37.772 sys 0m0.080s 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:37.772 ************************************ 00:37:37.772 END TEST nvmf_target_disconnect_tc1 00:37:37.772 ************************************ 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:37.772 22:25:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:37.772 ************************************ 00:37:37.772 START TEST nvmf_target_disconnect_tc2 00:37:37.772 ************************************ 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3777980 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3777980 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3777980 ']' 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.772 22:25:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:37.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:37.772 22:25:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:37.772 [2024-10-12 22:25:55.793174] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:37.772 [2024-10-12 22:25:55.793235] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:37.772 [2024-10-12 22:25:55.882732] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:37.772 [2024-10-12 22:25:55.932613] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:37.772 [2024-10-12 22:25:55.932671] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:37.772 [2024-10-12 22:25:55.932683] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:37.772 [2024-10-12 22:25:55.932692] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:37.772 [2024-10-12 22:25:55.932701] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:37.772 [2024-10-12 22:25:55.932886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:37:37.772 [2024-10-12 22:25:55.933045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:37:37.772 [2024-10-12 22:25:55.933212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:37:37.772 [2024-10-12 22:25:55.933387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.347 Malloc0 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.347 22:25:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.347 [2024-10-12 22:25:56.703905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.347 22:25:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.347 [2024-10-12 22:25:56.744284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3778321 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:38.347 22:25:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:40.917 22:25:58 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3777980
00:37:40.917 22:25:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:37:40.917 Read completed with error (sct=0, sc=8)
00:37:40.917 starting I/O failed
00:37:40.917 Write completed with error (sct=0, sc=8)
00:37:40.917 starting I/O failed
00:37:40.917 [... 30 more Read/Write completions on this qpair failed with (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:37:40.917 [2024-10-12 22:25:58.781879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:37:40.917 [... 32 Read/Write completions on the second qpair failed with (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:37:40.917 [2024-10-12 22:25:58.782260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:40.917 [2024-10-12 22:25:58.782694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.917 [2024-10-12 22:25:58.782719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.917 qpair failed and we were unable to recover it.
00:37:40.917 [2024-10-12 22:25:58.783062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.917 [2024-10-12 22:25:58.783075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.917 qpair failed and we were unable to recover it.
00:37:40.917 [2024-10-12 22:25:58.783504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.917 [2024-10-12 22:25:58.783557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.917 qpair failed and we were unable to recover it.
00:37:40.917 [2024-10-12 22:25:58.783899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.917 [2024-10-12 22:25:58.783913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.917 qpair failed and we were unable to recover it.
00:37:40.917 [2024-10-12 22:25:58.784044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.917 [2024-10-12 22:25:58.784055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.917 qpair failed and we were unable to recover it.
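Two failure signatures dominate this stretch of the log: in-flight I/Os completing with NVMe status (sct=0, sc=8), and reconnect attempts dying with errno 111. Under the NVMe base spec, status code type 0 (generic command status) with status code 0x08 is "Command Aborted due to SQ Deletion", which is what you would expect once `kill -9` tears down the target and its queue pairs vanish. The following decoder is a minimal sketch, not SPDK code; the helper name `decode_status` is ours, and only the codes seen in this log are mapped:

```python
# Hypothetical decoder for the (sct, sc) pairs printed in the log; not SPDK code.
# The mapping follows the NVMe base spec's generic command status values.
GENERIC_STATUS = {  # SCT 0x0: generic command status
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # I/O dropped when its queue pair is destroyed
}

def decode_status(sct: int, sc: int) -> str:
    """Render an NVMe completion status as text (generic status type only)."""
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"GENERIC 0x{sc:02x}")
    return f"SCT 0x{sct:x} SC 0x{sc:02x}"

print(decode_status(0, 8))  # the status every aborted I/O in this log reports
```

With queue depth 32 on each of the two I/O qpairs (`reconnect -q 32 ... -c 0xF`), 32 completions fail per qpair before the corresponding "CQ transport error -6" line, which matches the counts above.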
00:37:40.917 [2024-10-12 22:25:58.784363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.917 [2024-10-12 22:25:58.784415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.917 qpair failed and we were unable to recover it. 00:37:40.917 [2024-10-12 22:25:58.784764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.917 [2024-10-12 22:25:58.784777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.917 qpair failed and we were unable to recover it. 00:37:40.917 [2024-10-12 22:25:58.785112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.917 [2024-10-12 22:25:58.785128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.917 qpair failed and we were unable to recover it. 00:37:40.917 [2024-10-12 22:25:58.785458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.917 [2024-10-12 22:25:58.785470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.785696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.785707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 
00:37:40.918 [2024-10-12 22:25:58.785988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.785999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.786220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.786231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.786533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.786543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.786852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.786863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.787176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.787187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 
00:37:40.918 [2024-10-12 22:25:58.787409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.787420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.787763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.787773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.787996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.788007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.788221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.788232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.788538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.788548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 
00:37:40.918 [2024-10-12 22:25:58.788872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.788882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.789158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.789169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.789518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.789528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.789838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.789849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.790055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.790066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 
00:37:40.918 [2024-10-12 22:25:58.790424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.790435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.790794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.790805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.791137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.791149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.791507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.791517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.791807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.791818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 
00:37:40.918 [2024-10-12 22:25:58.792147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.792158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.792476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.792486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.792799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.792810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.793150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.793160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.793402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.793413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 
00:37:40.918 [2024-10-12 22:25:58.793755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.793767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.794066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.794077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.794392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.794403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.794676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.794686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.795026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.795040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 
00:37:40.918 [2024-10-12 22:25:58.795358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.795369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.795717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.795727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.796059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.796069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.796385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.796396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.796712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.796722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 
00:37:40.918 [2024-10-12 22:25:58.796937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.796948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.797272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.797284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.797608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.797618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.918 [2024-10-12 22:25:58.797942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.918 [2024-10-12 22:25:58.797953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.918 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.798262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.798272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 
00:37:40.919 [2024-10-12 22:25:58.798491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.798501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.798786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.798795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.799084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.799094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.799310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.799320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.799677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.799687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 
00:37:40.919 [2024-10-12 22:25:58.800011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.800021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.800341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.800352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.800641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.800651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.800970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.800981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.801340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.801350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 
00:37:40.919 [2024-10-12 22:25:58.801686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.801696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.802085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.802094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.802463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.802473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.802819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.802828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.803140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.803149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 
00:37:40.919 [2024-10-12 22:25:58.803441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.803451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.803778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.803787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.804082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.804091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.804477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.804488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.804789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.804798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 
00:37:40.919 [2024-10-12 22:25:58.805097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.805111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.805474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.805485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.805677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.805687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.806036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.806048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.806364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.806378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 
00:37:40.919 [2024-10-12 22:25:58.806603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.806615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.806961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.806974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.807362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.807374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.807675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.807686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.808028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.808044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 
00:37:40.919 [2024-10-12 22:25:58.808377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.808390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.808656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.808668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.808871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.808885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.809210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.809223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.809407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.809420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 
00:37:40.919 [2024-10-12 22:25:58.809727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.809739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.810056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.810068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.810367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.810379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.919 qpair failed and we were unable to recover it. 00:37:40.919 [2024-10-12 22:25:58.810680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.919 [2024-10-12 22:25:58.810693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.811003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.811016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 
00:37:40.920 [2024-10-12 22:25:58.811356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.811368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.811716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.811728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.812021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.812033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.812344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.812357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.812674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.812686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 
00:37:40.920 [2024-10-12 22:25:58.813004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.813016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.813419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.813431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.813757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.813770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.813966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.813979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.814362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.814375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 
00:37:40.920 [2024-10-12 22:25:58.814674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.814686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.815012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.815024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.815418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.815431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.815748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.815761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.816079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.816096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 
00:37:40.920 [2024-10-12 22:25:58.816397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.816414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.816753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.816769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.816984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.817000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.817213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.817230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.817548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.817564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 
00:37:40.920 [2024-10-12 22:25:58.817896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.817912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.818252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.818269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.818600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.818616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.818945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.818962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.819297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.819314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 
00:37:40.920 [2024-10-12 22:25:58.819638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.819654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.819981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.819997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.820318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.820335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.820550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.820567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.820881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.820901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 
00:37:40.920 [2024-10-12 22:25:58.821207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.821224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.821534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.821551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.821896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.821912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.822251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.822269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.822471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.822486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 
00:37:40.920 [2024-10-12 22:25:58.822805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.822821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.823138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.823155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.823485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.823502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.920 [2024-10-12 22:25:58.823825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.920 [2024-10-12 22:25:58.823841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.920 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.824163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.824179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 
00:37:40.921 [2024-10-12 22:25:58.824512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.824528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.824848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.824864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.825182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.825199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.825585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.825601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.825846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.825862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 
00:37:40.921 [2024-10-12 22:25:58.826188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.826205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.826524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.826540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.826873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.826891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.827082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.827100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.827449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.827466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 
00:37:40.921 [2024-10-12 22:25:58.827783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.827800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.828144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.828165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.828500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.828520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.828843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.828863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.829210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.829231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 
00:37:40.921 [2024-10-12 22:25:58.829619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.829639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.829990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.830011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.830325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.830346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.830670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.830690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.831000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.831021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 
00:37:40.921 [2024-10-12 22:25:58.831345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.831365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.831674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.831694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.832029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.832050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.832432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.832452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.832783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.832803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 
00:37:40.921 [2024-10-12 22:25:58.833148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.833169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.833512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.833532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.833850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.833870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.834187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.834210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.834628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.834652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 
00:37:40.921 [2024-10-12 22:25:58.834982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.835003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.835204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.921 [2024-10-12 22:25:58.835225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.921 qpair failed and we were unable to recover it. 00:37:40.921 [2024-10-12 22:25:58.835539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.835559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.835895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.835915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.836230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.836252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 
00:37:40.922 [2024-10-12 22:25:58.836585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.836605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.836940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.836960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.837168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.837191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.837557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.837577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.837987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.838008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 
00:37:40.922 [2024-10-12 22:25:58.838357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.838379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.838713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.838734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.839060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.839080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.839436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.839457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.839750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.839770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 
00:37:40.922 [2024-10-12 22:25:58.840114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.840135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.840505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.840524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.840863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.840883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.841217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.841245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.841660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.841688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 
00:37:40.922 [2024-10-12 22:25:58.842028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.842057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.842400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.842428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.842794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.842823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.843194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.843222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.843584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.843611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 
00:37:40.922 [2024-10-12 22:25:58.843983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.844010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.844273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.844303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.844651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.844678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.845048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.845076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.845348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.845379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 
00:37:40.922 [2024-10-12 22:25:58.845760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.845787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.846150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.846180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.846548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.846576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.846948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.846975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.847348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.847377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 
00:37:40.922 [2024-10-12 22:25:58.847712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.847739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.848121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.848149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.848513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.848542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.848892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.848919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.849290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.849326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 
00:37:40.922 [2024-10-12 22:25:58.849682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.849709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.850075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.922 [2024-10-12 22:25:58.850112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.922 qpair failed and we were unable to recover it. 00:37:40.922 [2024-10-12 22:25:58.850477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.850504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.850839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.850866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.851235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.851265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 
00:37:40.923 [2024-10-12 22:25:58.851627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.851655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.852015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.852044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.852398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.852426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.852792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.852821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.853180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.853209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 
00:37:40.923 [2024-10-12 22:25:58.853571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.853600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.853944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.853971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.854333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.854362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.854724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.854752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.855124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.855153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 
00:37:40.923 [2024-10-12 22:25:58.855517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.855546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.855814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.855841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.856254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.856283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.856505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.856535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.856889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.856917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 
00:37:40.923 [2024-10-12 22:25:58.857283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.857312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.857663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.857691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.858039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.858066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.858460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.858489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.858849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.858877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 
00:37:40.923 [2024-10-12 22:25:58.859220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.859249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.859612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.859641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.859893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.859920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.860201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.860232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.860609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.860636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 
00:37:40.923 [2024-10-12 22:25:58.860963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.860992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.861345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.861374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.861749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.861777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.862145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.862173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.862413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.862440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 
00:37:40.923 [2024-10-12 22:25:58.862814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.862841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.863204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.863233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.863577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.863604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.863966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.863993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.864394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.864429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 
00:37:40.923 [2024-10-12 22:25:58.864767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.864796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.865136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.865166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.923 [2024-10-12 22:25:58.865536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.923 [2024-10-12 22:25:58.865566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.923 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.865939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.865968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.866314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.866344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 
00:37:40.924 [2024-10-12 22:25:58.866696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.866723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.867066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.867094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.867340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.867372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.867601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.867628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.867990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.868018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 
00:37:40.924 [2024-10-12 22:25:58.868366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.868396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.868758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.868787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.869040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.869068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.869447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.869475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.869842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.869870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 
00:37:40.924 [2024-10-12 22:25:58.870229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.870257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.870630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.870658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.871021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.871048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.871489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.871518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.871814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.871843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 
00:37:40.924 [2024-10-12 22:25:58.872216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.872245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.872590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.872618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.872956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.872983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.873359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.873388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 00:37:40.924 [2024-10-12 22:25:58.873739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.924 [2024-10-12 22:25:58.873766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.924 qpair failed and we were unable to recover it. 
00:37:40.924 [2024-10-12 22:25:58.874131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.924 [2024-10-12 22:25:58.874160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.924 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every subsequent connect attempt from 22:25:58.874562 through 22:25:58.918211, always errno = 111 against tqpair=0x7fb8fc000b90, addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:37:40.927 [2024-10-12 22:25:58.918581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.918609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.918965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.918993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.919361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.919389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.919759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.919788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.920145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.920174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 
00:37:40.927 [2024-10-12 22:25:58.920528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.920554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.920805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.920834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.921190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.921219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.921577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.921606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.921981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.922009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 
00:37:40.927 [2024-10-12 22:25:58.922343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.922373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.922749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.922776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.923162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.923191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.923436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.923463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.923839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.923866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 
00:37:40.927 [2024-10-12 22:25:58.924227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.924256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.924597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.924624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.924867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.924898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.925263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.927 [2024-10-12 22:25:58.925293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.927 qpair failed and we were unable to recover it. 00:37:40.927 [2024-10-12 22:25:58.925654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.925688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 
00:37:40.928 [2024-10-12 22:25:58.926042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.926070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.926382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.926410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.926776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.926803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.927165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.927195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.927558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.927585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 
00:37:40.928 [2024-10-12 22:25:58.927955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.927983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.928384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.928413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.928769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.928796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.929169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.929198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.929551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.929578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 
00:37:40.928 [2024-10-12 22:25:58.929953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.929980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.930246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.930274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.930505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.930536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.930897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.930925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.931271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.931299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 
00:37:40.928 [2024-10-12 22:25:58.931666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.931693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.932041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.932068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.932429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.932458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.932824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.932852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.933216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.933246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 
00:37:40.928 [2024-10-12 22:25:58.933611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.933638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.933995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.934022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.934380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.934409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.934754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.934781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.935146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.935176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 
00:37:40.928 [2024-10-12 22:25:58.935415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.935442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.935806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.935833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.936210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.936238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.936620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.936648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.937009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.937036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 
00:37:40.928 [2024-10-12 22:25:58.937385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.937414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.937780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.937809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.938169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.938197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.938562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.938597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.938926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.938954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 
00:37:40.928 [2024-10-12 22:25:58.939313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.939341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.939707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.939734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.940079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.940114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.940496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.940524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.928 qpair failed and we were unable to recover it. 00:37:40.928 [2024-10-12 22:25:58.940888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.928 [2024-10-12 22:25:58.940915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 
00:37:40.929 [2024-10-12 22:25:58.941277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.941306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.941658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.941686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.942052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.942079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.942447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.942475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.942816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.942843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 
00:37:40.929 [2024-10-12 22:25:58.943282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.943310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.943675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.943704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.944073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.944100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.944477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.944504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.944863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.944891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 
00:37:40.929 [2024-10-12 22:25:58.945249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.945278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.945610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.945637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.946038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.946065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.946514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.946543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 00:37:40.929 [2024-10-12 22:25:58.946809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.946836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 
00:37:40.929 [2024-10-12 22:25:58.947192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.929 [2024-10-12 22:25:58.947220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.929 qpair failed and we were unable to recover it. 
[... identical error sequence repeated: every connect() attempt to 10.0.0.2:4420 for tqpair=0x7fb8fc000b90 failed with errno = 111 (connection refused) from 22:25:58.947192 through 22:25:58.989476, each followed by "qpair failed and we were unable to recover it." ...]
00:37:40.932 [2024-10-12 22:25:58.989851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.989879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.990224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.990254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.990647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.990675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.991040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.991069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.991436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.991466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 
00:37:40.932 [2024-10-12 22:25:58.991818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.991847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.992179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.992208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.992574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.992602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.992985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.993014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.993354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.993382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 
00:37:40.932 [2024-10-12 22:25:58.993632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.993660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.994002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.994032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.994420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.994449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.994816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.994845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.995222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.995251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 
00:37:40.932 [2024-10-12 22:25:58.995622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.995650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.996026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.996060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.996420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.996448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.996815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.996842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.997219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.997248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 
00:37:40.932 [2024-10-12 22:25:58.997638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.997666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.997935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.997961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.998296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.998325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.998703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.998732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.999121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.999150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 
00:37:40.932 [2024-10-12 22:25:58.999466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.999493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.932 qpair failed and we were unable to recover it. 00:37:40.932 [2024-10-12 22:25:58.999756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.932 [2024-10-12 22:25:58.999783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.000156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.000186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.000544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.000571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.000826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.000855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 
00:37:40.933 [2024-10-12 22:25:59.001227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.001257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.001622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.001650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.002020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.002048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.002431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.002460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.002804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.002832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 
00:37:40.933 [2024-10-12 22:25:59.003199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.003230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.003602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.003629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.003990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.004019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.004398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.004427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.004854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.004884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 
00:37:40.933 [2024-10-12 22:25:59.005250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.005278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.005646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.005673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.006040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.006069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.006458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.006487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.006833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.006861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 
00:37:40.933 [2024-10-12 22:25:59.007235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.007264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.007522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.007551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.007992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.008020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.008356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.008386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.008776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.008803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 
00:37:40.933 [2024-10-12 22:25:59.009178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.009206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.009431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.009462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.009825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.009853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.010214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.010245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.010504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.010533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 
00:37:40.933 [2024-10-12 22:25:59.010797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.010825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.011174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.011216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.011626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.011655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.012035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.012063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.012445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.012477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 
00:37:40.933 [2024-10-12 22:25:59.012858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.012887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.013229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.013260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.013600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.013631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.014002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.014030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.014411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.014441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 
00:37:40.933 [2024-10-12 22:25:59.014803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.014830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.933 [2024-10-12 22:25:59.015084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.933 [2024-10-12 22:25:59.015124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.933 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.015505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.015533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.015898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.015925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.016309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.016340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 
00:37:40.934 [2024-10-12 22:25:59.016602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.016630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.016982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.017011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.017438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.017468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.017799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.017827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.018188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.018218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 
00:37:40.934 [2024-10-12 22:25:59.018627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.018656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.019022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.019049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.019298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.019326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.019700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.019728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.019967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.019995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 
00:37:40.934 [2024-10-12 22:25:59.020369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.020399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.020772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.020800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.021032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.021060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.021488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.021519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.021754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.021783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 
00:37:40.934 [2024-10-12 22:25:59.022133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.022162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.022405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.022432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.022784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.022812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.023156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.023184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.023566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.023594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 
00:37:40.934 [2024-10-12 22:25:59.023962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.023990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.024359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.024389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.024639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.024668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.025033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.025062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.025329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.025358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 
00:37:40.934 [2024-10-12 22:25:59.025694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.025723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.025969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.026009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.026338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.026367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.026739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.026768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.027131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.027160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 
00:37:40.934 [2024-10-12 22:25:59.027529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.027556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.027921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.027949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.934 [2024-10-12 22:25:59.028211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.934 [2024-10-12 22:25:59.028244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.934 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.028625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.028652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.028990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.029019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 
00:37:40.935 [2024-10-12 22:25:59.029384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.029413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.029791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.029819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.030195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.030224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.030562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.030593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.030930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.030958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 
00:37:40.935 [2024-10-12 22:25:59.031204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.031233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.031614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.031641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.031980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.032009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.032258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.032287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.032622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.032651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 
00:37:40.935 [2024-10-12 22:25:59.033025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.033053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.033422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.033452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.033801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.033828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.034204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.034234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.034602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.034629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 
00:37:40.935 [2024-10-12 22:25:59.035046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.035074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.035315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.035344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.035704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.035732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.036137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.036170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.036414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.036442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 
00:37:40.935 [2024-10-12 22:25:59.036797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.036825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.037198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.037227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.037619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.037646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.038015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.038042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.038408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.038438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 
00:37:40.935 [2024-10-12 22:25:59.038871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.038898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.039294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.039324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.039684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.039711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.040072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.040100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.040460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.040489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 
00:37:40.935 [2024-10-12 22:25:59.040859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.040887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.041254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.041290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.041536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.041563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.041870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.041898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.042367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.042397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 
00:37:40.935 [2024-10-12 22:25:59.042741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.042770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.043130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.043159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.043540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.935 [2024-10-12 22:25:59.043567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.935 qpair failed and we were unable to recover it. 00:37:40.935 [2024-10-12 22:25:59.043917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.043944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.044218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.044248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 
00:37:40.936 [2024-10-12 22:25:59.044499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.044529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.044903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.044931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.045285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.045314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.045674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.045702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.046076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.046117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 
00:37:40.936 [2024-10-12 22:25:59.046482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.046511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.046856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.046886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.047339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.047369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.047611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.047639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.047887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.047919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 
00:37:40.936 [2024-10-12 22:25:59.048290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.048320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.048690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.048718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.049087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.049127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.049465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.049495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.049765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.049793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 
00:37:40.936 [2024-10-12 22:25:59.050146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.050177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.050556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.050584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.050955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.050983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.051420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.051451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.051818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.051846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 
00:37:40.936 [2024-10-12 22:25:59.051991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.052020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.052426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.052455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.052814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.052844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.053090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.053135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.053553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.053582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 
00:37:40.936 [2024-10-12 22:25:59.053936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.053963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.054325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.054354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.054723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.054751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.055125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.055155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.055513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.055541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 
00:37:40.936 [2024-10-12 22:25:59.055907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.055936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.056284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.056321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.056690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.056719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.057037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.057065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 00:37:40.936 [2024-10-12 22:25:59.057427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.936 [2024-10-12 22:25:59.057458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.936 qpair failed and we were unable to recover it. 
00:37:40.936 [2024-10-12 22:25:59.057798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.936 [2024-10-12 22:25:59.057827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.936 qpair failed and we were unable to recover it.
00:37:40.936 [2024-10-12 22:25:59.058070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.936 [2024-10-12 22:25:59.058098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.936 qpair failed and we were unable to recover it.
00:37:40.936 [2024-10-12 22:25:59.058376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.936 [2024-10-12 22:25:59.058405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.936 qpair failed and we were unable to recover it.
00:37:40.936 [2024-10-12 22:25:59.058839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.058867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.059262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.059291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.059667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.059695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.060077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.060116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.060479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.060509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.060892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.060920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.061181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.061211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.061567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.061596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.061858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.061885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.062130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.062159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.062524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.062552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.062917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.062945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.063334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.063363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.063751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.063779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.064034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.064062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.064410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.064440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.064804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.064831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.065197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.065227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.065591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.065619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.065993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.066021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.066394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.066433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.066765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.066793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.067215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.067244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.067598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.067627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.067993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.068022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.068366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.068396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.068749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.068777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.069146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.069176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.069467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.069496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.069728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.069765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.070146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.070175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.070550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.070578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.071021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.071049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.071409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.071444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.071788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.071817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.072048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.072075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.072473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.072502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.072864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.072892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.073133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.073162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.073552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.073580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.937 [2024-10-12 22:25:59.073841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.937 [2024-10-12 22:25:59.073869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.937 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.074123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.074152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.074554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.074583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.074817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.074845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.075232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.075262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.075598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.075627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.076035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.076063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.076443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.076473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.076829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.076857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.077236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.077266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.077631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.077659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.078101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.078140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.078494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.078521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.078880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.078908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.079162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.079192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.079448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.079479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.079754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.079782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.080136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.080167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.080508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.080538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.080872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.080900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.081246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.081278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.081521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.081549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.081927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.081957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.082286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.082315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.082702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.082730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.082961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.082989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.083151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.083180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.083528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.083558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.083807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.083837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.084235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.084265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.084643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.084671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.085029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.085057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.085447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.085477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.085841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.085880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.086235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.086264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.086626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.086654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.087027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.938 [2024-10-12 22:25:59.087055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.938 qpair failed and we were unable to recover it.
00:37:40.938 [2024-10-12 22:25:59.087429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.087458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.087810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.087839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.088205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.088235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.088610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.088638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.088906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.088933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.089173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.089201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.089545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.089573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.089950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.089978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.090337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.090366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.090711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.090739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.091115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.091146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.091535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.091563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.091928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.091956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.092175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.092204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.092447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.939 [2024-10-12 22:25:59.092478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.939 qpair failed and we were unable to recover it.
00:37:40.939 [2024-10-12 22:25:59.092822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.092850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.093224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.093253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.093621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.093650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.094026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.094054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.094416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.094446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 
00:37:40.939 [2024-10-12 22:25:59.094813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.094842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.095092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.095131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.095494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.095522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.095776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.095808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.096179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.096209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 
00:37:40.939 [2024-10-12 22:25:59.096595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.096623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.097066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.097093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.097365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.097394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.097751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.097778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.098149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.098178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 
00:37:40.939 [2024-10-12 22:25:59.098552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.098580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.098941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.098969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.099337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.099366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.099716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.099745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.100136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.100167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 
00:37:40.939 [2024-10-12 22:25:59.100515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.100544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.100912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.100947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.101279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.101308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.101677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.101704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 00:37:40.939 [2024-10-12 22:25:59.102080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.939 [2024-10-12 22:25:59.102130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.939 qpair failed and we were unable to recover it. 
00:37:40.939 [2024-10-12 22:25:59.102478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.102506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.102871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.102900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.103254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.103284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.103668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.103696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.104058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.104086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 
00:37:40.940 [2024-10-12 22:25:59.104481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.104511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.104752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.104784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.105157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.105187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.105515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.105543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.105798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.105829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 
00:37:40.940 [2024-10-12 22:25:59.106086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.106124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.106491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.106519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.106880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.106908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.107271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.107300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.107645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.107672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 
00:37:40.940 [2024-10-12 22:25:59.108036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.108065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.108461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.108491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.108853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.108882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.109179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.109208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.109641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.109669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 
00:37:40.940 [2024-10-12 22:25:59.110032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.110060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.110421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.110450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.110824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.110851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.111226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.111256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.111640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.111667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 
00:37:40.940 [2024-10-12 22:25:59.112035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.112062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.112462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.112492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.112854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.112882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.113252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.113291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.113620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.113647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 
00:37:40.940 [2024-10-12 22:25:59.114015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.114044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.114402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.114430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.114770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.114800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.115159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.115189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.115558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.115586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 
00:37:40.940 [2024-10-12 22:25:59.116028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.116056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.116413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.116450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.116805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.116833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.117069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.117097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 00:37:40.940 [2024-10-12 22:25:59.117456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.940 [2024-10-12 22:25:59.117484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.940 qpair failed and we were unable to recover it. 
00:37:40.940 [2024-10-12 22:25:59.117847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.117875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.118252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.118282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.118641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.118669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.119041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.119069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.119425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.119454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 
00:37:40.941 [2024-10-12 22:25:59.119818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.119845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.120208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.120237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.120606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.120634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.120996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.121024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.121396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.121425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 
00:37:40.941 [2024-10-12 22:25:59.121776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.121804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.122175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.122205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.122669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.122698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.123038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.123066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.123300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.123331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 
00:37:40.941 [2024-10-12 22:25:59.123574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.123605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.124025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.124053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.124388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.124418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.124777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.124805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.125170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.125200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 
00:37:40.941 [2024-10-12 22:25:59.125562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.125590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.125964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.125992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.126231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.126261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.126533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.126562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 00:37:40.941 [2024-10-12 22:25:59.126951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.941 [2024-10-12 22:25:59.126979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.941 qpair failed and we were unable to recover it. 
00:37:40.944 [2024-10-12 22:25:59.169349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.169378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.169722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.169749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.170143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.170173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.170514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.170543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.170885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.170914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 
00:37:40.944 [2024-10-12 22:25:59.171280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.171309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.171674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.171701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.172067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.172096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.172435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.172472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.172812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.172841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 
00:37:40.944 [2024-10-12 22:25:59.173098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.173140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.173502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.173530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.173896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.173923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.174311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.174341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.174728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.174755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 
00:37:40.944 [2024-10-12 22:25:59.175124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.175155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.175519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.175547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.175923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.175950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.176326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.176356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.176688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.176716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 
00:37:40.944 [2024-10-12 22:25:59.177081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.177119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.177457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.177486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.944 qpair failed and we were unable to recover it. 00:37:40.944 [2024-10-12 22:25:59.177828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.944 [2024-10-12 22:25:59.177857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.178226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.178256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.178624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.178652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 
00:37:40.945 [2024-10-12 22:25:59.179016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.179044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.179401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.179431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.179829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.179857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.180212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.180241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.180598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.180626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 
00:37:40.945 [2024-10-12 22:25:59.180988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.181015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.181386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.181415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.181670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.181697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.182045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.182073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.182453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.182483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 
00:37:40.945 [2024-10-12 22:25:59.182844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.182873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.183215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.183246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.183614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.183642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.184007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.184036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.184221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.184254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 
00:37:40.945 [2024-10-12 22:25:59.184501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.184530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.184836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.184866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.185224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.185253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.185630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.185658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.186019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.186046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 
00:37:40.945 [2024-10-12 22:25:59.186412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.186442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.186806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.186834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.187164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.187195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.187552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.187586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.187952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.187980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 
00:37:40.945 [2024-10-12 22:25:59.188348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.188378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.188709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.188737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.189089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.189130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.189477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.189505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.189912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.189939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 
00:37:40.945 [2024-10-12 22:25:59.190284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.190314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.190647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.190674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.191025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.191054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.191404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.191433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.191785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.191813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 
00:37:40.945 [2024-10-12 22:25:59.192179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.192208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.192566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.192596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.945 [2024-10-12 22:25:59.192940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.945 [2024-10-12 22:25:59.192968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.945 qpair failed and we were unable to recover it. 00:37:40.946 [2024-10-12 22:25:59.193324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.193354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 00:37:40.946 [2024-10-12 22:25:59.193719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.193749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 
00:37:40.946 [2024-10-12 22:25:59.194116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.194145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 00:37:40.946 [2024-10-12 22:25:59.194448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.194475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 00:37:40.946 [2024-10-12 22:25:59.194840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.194868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 00:37:40.946 [2024-10-12 22:25:59.195234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.195264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 00:37:40.946 [2024-10-12 22:25:59.195636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.195664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 
00:37:40.946 [2024-10-12 22:25:59.196023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.196051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 00:37:40.946 [2024-10-12 22:25:59.196415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.196445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 00:37:40.946 [2024-10-12 22:25:59.196802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.196831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 00:37:40.946 [2024-10-12 22:25:59.197076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.197115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 00:37:40.946 [2024-10-12 22:25:59.197461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.946 [2024-10-12 22:25:59.197490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.946 qpair failed and we were unable to recover it. 
00:37:40.946 [2024-10-12 22:25:59.197859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.197888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.198253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.198283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.198654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.198683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.199047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.199075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.199450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.199479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.199841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.199871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.200239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.200268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.200610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.200639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.200918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.200946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.201322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.201351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.201710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.201740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.201971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.201999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.202411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.202442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.202803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.202837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.203174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.203204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.203570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.203600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.203959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.203986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.204363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.204392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.204640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.204671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.205032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.205062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.205468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.205497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.205852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.205879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.206247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.206278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.206641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.206669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.207038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.207066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.207309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.207343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.207724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.207753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.208127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.946 [2024-10-12 22:25:59.208157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.946 qpair failed and we were unable to recover it.
00:37:40.946 [2024-10-12 22:25:59.208518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.208548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.208911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.208940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.209341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.209372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.209735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.209763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.210115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.210145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.210375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.210402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.210651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.210685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.211037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.211064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.211477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.211510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.211797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.211825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.212192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.212221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.212560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.212589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.212949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.212980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.213247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.213276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.213544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.213571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.213935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.213966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.214337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.214366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.214725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.214754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.215127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.215158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.215520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.215549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.215922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.215949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.216287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.216315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.216524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.216556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.216932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.216958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.217332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.217362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.217723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.217757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.218125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.218154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.218513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.218541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.218906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.218936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.219293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.219323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.219587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.219614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.220024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.220052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.220420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.220449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.220803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.220830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.221179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.221209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.221574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.221601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.947 qpair failed and we were unable to recover it.
00:37:40.947 [2024-10-12 22:25:59.221964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.947 [2024-10-12 22:25:59.221991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.222357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.222386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.222726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.222754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.223133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.223163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.223392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.223423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.223766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.223795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.224157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.224186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.224553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.224580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.224964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.224991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.225425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.225455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.225795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.225824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.226193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.226221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.226583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.226610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.226987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.227014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.227376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.227405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.227771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.227798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.228158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.228187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.228560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.228587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.228951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.228978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.229387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.229416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.229780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.229808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.230173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.230204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.230579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.230607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.230977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.231004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.231384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.231412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.231788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.231816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.232174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.232203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.232610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.232637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.232980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.233009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.233389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.233424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.233788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.233815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.234173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.234203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.234547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.234576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.234924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.234951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.235307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.235337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.235701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.235729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.236095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.236135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.236492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.236522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.236870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.236898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.237254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.237285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.948 [2024-10-12 22:25:59.237623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.948 [2024-10-12 22:25:59.237650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.948 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.238014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.949 [2024-10-12 22:25:59.238042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.949 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.238335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.949 [2024-10-12 22:25:59.238364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.949 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.238743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.949 [2024-10-12 22:25:59.238771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.949 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.239134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.949 [2024-10-12 22:25:59.239162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.949 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.239523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.949 [2024-10-12 22:25:59.239551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.949 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.239914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.949 [2024-10-12 22:25:59.239942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.949 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.240322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.949 [2024-10-12 22:25:59.240350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.949 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.240726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.949 [2024-10-12 22:25:59.240754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.949 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.241121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.949 [2024-10-12 22:25:59.241150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.949 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.241512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.949 [2024-10-12 22:25:59.241539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.949 qpair failed and we were unable to recover it.
00:37:40.949 [2024-10-12 22:25:59.241911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.241938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.242313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.242342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.242701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.242728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.243087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.243125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.243559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.243587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 
00:37:40.949 [2024-10-12 22:25:59.243922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.243951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.244287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.244316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.244691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.244719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.245083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.245118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.245462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.245489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 
00:37:40.949 [2024-10-12 22:25:59.245835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.245864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.246230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.246260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.246613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.246641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.246880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.246911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.247263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.247292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 
00:37:40.949 [2024-10-12 22:25:59.247632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.247660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.248029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.248056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.248421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.248450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.248823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.248857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.249212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.249241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 
00:37:40.949 [2024-10-12 22:25:59.249614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.249641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.249885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.249913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.250271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.250300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.250668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.250695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.251062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.251090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 
00:37:40.949 [2024-10-12 22:25:59.251337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.251369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.251744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.251772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.252006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.252033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.252414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.252443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 00:37:40.949 [2024-10-12 22:25:59.252808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.949 [2024-10-12 22:25:59.252835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.949 qpair failed and we were unable to recover it. 
00:37:40.949 [2024-10-12 22:25:59.253209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.253238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.253600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.253627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.254031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.254059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.254469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.254497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.254857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.254884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 
00:37:40.950 [2024-10-12 22:25:59.255300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.255330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.255656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.255684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.256070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.256097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.256472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.256501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.256859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.256886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 
00:37:40.950 [2024-10-12 22:25:59.257243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.257272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.257634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.257661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.258101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.258139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.258516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.258545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.258898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.258925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 
00:37:40.950 [2024-10-12 22:25:59.259293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.259324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.259683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.259711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.260074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.260101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.260500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.260529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.260888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.260915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 
00:37:40.950 [2024-10-12 22:25:59.261290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.261319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.261588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.261616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.262018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.262047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.262423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.262454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.262825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.262853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 
00:37:40.950 [2024-10-12 22:25:59.263209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.263240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.263599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.263627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.263982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.264011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.264381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.264418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.264774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.264803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 
00:37:40.950 [2024-10-12 22:25:59.265137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.265166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.265541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.265568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.265906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.265933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.266327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.266357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.266724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.266753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 
00:37:40.950 [2024-10-12 22:25:59.267134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.267163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.267586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.267615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.267962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.267991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.268340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.268369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 00:37:40.950 [2024-10-12 22:25:59.268741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.268769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.950 qpair failed and we were unable to recover it. 
00:37:40.950 [2024-10-12 22:25:59.269127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.950 [2024-10-12 22:25:59.269156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.951 qpair failed and we were unable to recover it. 00:37:40.951 [2024-10-12 22:25:59.269389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.951 [2024-10-12 22:25:59.269417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.951 qpair failed and we were unable to recover it. 00:37:40.951 [2024-10-12 22:25:59.269776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.951 [2024-10-12 22:25:59.269805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.951 qpair failed and we were unable to recover it. 00:37:40.951 [2024-10-12 22:25:59.270173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.951 [2024-10-12 22:25:59.270201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.951 qpair failed and we were unable to recover it. 00:37:40.951 [2024-10-12 22:25:59.270578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.951 [2024-10-12 22:25:59.270606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.951 qpair failed and we were unable to recover it. 
00:37:40.951 [2024-10-12 22:25:59.270970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.951 [2024-10-12 22:25:59.270998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.951 qpair failed and we were unable to recover it. 
[... identical error sequence repeats from 22:25:59.270970 through 22:25:59.313599: posix.c:1055:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:37:40.954 [2024-10-12 22:25:59.313859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.313888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.314244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.314273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.314639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.314668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.315137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.315168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.315525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.315553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 
00:37:40.954 [2024-10-12 22:25:59.315895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.315923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.316197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.316234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.316625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.316653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.317015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.317043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.317259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.317288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 
00:37:40.954 [2024-10-12 22:25:59.317663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.317691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.318062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.318091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.318439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.318468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.318834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.318864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.319239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.319269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 
00:37:40.954 [2024-10-12 22:25:59.319646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.319676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.319901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.319930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.320329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.320360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.320727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.320755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.321122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.321152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 
00:37:40.954 [2024-10-12 22:25:59.321399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.321426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.321778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.321807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.322168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.322198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.322574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.322601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.322968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.322996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 
00:37:40.954 [2024-10-12 22:25:59.323444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.323474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.323846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.323874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.324211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.324241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.324614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.324642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.325008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.325038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 
00:37:40.954 [2024-10-12 22:25:59.325398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.325427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.325794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.325821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.326186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.326214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.326627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.326656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.327014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.327043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 
00:37:40.954 [2024-10-12 22:25:59.327307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.327337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.327797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.327825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.954 qpair failed and we were unable to recover it. 00:37:40.954 [2024-10-12 22:25:59.328159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.954 [2024-10-12 22:25:59.328190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.328581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.328608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.328972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.329002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 
00:37:40.955 [2024-10-12 22:25:59.329371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.329401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.329765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.329794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.330161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.330190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.330556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.330585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.330982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.331010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 
00:37:40.955 [2024-10-12 22:25:59.331425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.331455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.331797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.331825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.332172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.332202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.332573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.332602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.332975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.333003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 
00:37:40.955 [2024-10-12 22:25:59.333414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.333444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.333838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.333867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.334121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.334150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.334510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.334540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.334767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.334795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 
00:37:40.955 [2024-10-12 22:25:59.335170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.335205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.335592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.335621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.335984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.336012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.336269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.336300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.336557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.336585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 
00:37:40.955 [2024-10-12 22:25:59.336925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.336952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.337325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.337354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.337707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.337736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.338122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.338151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.338417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.338448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 
00:37:40.955 [2024-10-12 22:25:59.338788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.338817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.339173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.339203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.339635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.339662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.340029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.340058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.340314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.340345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 
00:37:40.955 [2024-10-12 22:25:59.340722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.340750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.341124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.341153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.341515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.341542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.955 [2024-10-12 22:25:59.341983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.955 [2024-10-12 22:25:59.342012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.955 qpair failed and we were unable to recover it. 00:37:40.956 [2024-10-12 22:25:59.342350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.956 [2024-10-12 22:25:59.342382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:40.956 qpair failed and we were unable to recover it. 
00:37:40.956 [2024-10-12 22:25:59.342714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:40.956 [2024-10-12 22:25:59.342742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:40.956 qpair failed and we were unable to recover it.
[... same three-line sequence — connect() failed, errno = 111 (posix.c:1055:posix_sock_create); sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 (nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock); "qpair failed and we were unable to recover it." — repeated continuously from [2024-10-12 22:25:59.342714] through [2024-10-12 22:25:59.384331] ...]
00:37:41.232 [2024-10-12 22:25:59.384302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.232 [2024-10-12 22:25:59.384331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.232 qpair failed and we were unable to recover it.
00:37:41.232 [2024-10-12 22:25:59.384629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.232 [2024-10-12 22:25:59.384656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.232 qpair failed and we were unable to recover it. 00:37:41.232 [2024-10-12 22:25:59.385038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.232 [2024-10-12 22:25:59.385067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.232 qpair failed and we were unable to recover it. 00:37:41.232 [2024-10-12 22:25:59.385485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.232 [2024-10-12 22:25:59.385515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.232 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.385954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.385982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.386357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.386385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 
00:37:41.233 [2024-10-12 22:25:59.386754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.386785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.387137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.387167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.387413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.387442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.387887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.387917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.388162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.388191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 
00:37:41.233 [2024-10-12 22:25:59.388567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.388596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.388970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.388998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.389371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.389401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.389745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.389774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.390156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.390186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 
00:37:41.233 [2024-10-12 22:25:59.390566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.390594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.390963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.390991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.391350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.391379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.391746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.391775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.392143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.392172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 
00:37:41.233 [2024-10-12 22:25:59.392501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.392530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.392804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.392832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.393203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.393234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.393580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.393607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.393948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.393977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 
00:37:41.233 [2024-10-12 22:25:59.394229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.394265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.394659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.394688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.395051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.395079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.395476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.395505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.395875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.395903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 
00:37:41.233 [2024-10-12 22:25:59.396301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.396331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.396710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.396738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.397139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.397168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.397527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.397557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.397934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.397963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 
00:37:41.233 [2024-10-12 22:25:59.398342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.398372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.398777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.398805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.399138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.399168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.233 qpair failed and we were unable to recover it. 00:37:41.233 [2024-10-12 22:25:59.399525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.233 [2024-10-12 22:25:59.399553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.399801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.399833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 
00:37:41.234 [2024-10-12 22:25:59.400197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.400228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.400575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.400604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.400984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.401012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.401360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.401390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.401649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.401677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 
00:37:41.234 [2024-10-12 22:25:59.402034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.402063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.402421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.402451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.402827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.402855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.403231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.403259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.403651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.403678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 
00:37:41.234 [2024-10-12 22:25:59.404045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.404073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.404457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.404486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.404859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.404901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.405258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.405288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.405724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.405751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 
00:37:41.234 [2024-10-12 22:25:59.405993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.406024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.406371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.406400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.406759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.406787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.407161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.407189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.407548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.407577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 
00:37:41.234 [2024-10-12 22:25:59.407904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.407934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.408310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.408341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.408686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.408715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.409082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.409137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.409547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.409575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 
00:37:41.234 [2024-10-12 22:25:59.409933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.409969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.410332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.410362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.410732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.410761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.411135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.411167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.411540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.411568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 
00:37:41.234 [2024-10-12 22:25:59.411809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.411839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.412084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.412124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.412470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.412497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.412869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.412897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.413269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.413298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 
00:37:41.234 [2024-10-12 22:25:59.413550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.413578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.413930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.413958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.414342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.234 [2024-10-12 22:25:59.414371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.234 qpair failed and we were unable to recover it. 00:37:41.234 [2024-10-12 22:25:59.414730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.235 [2024-10-12 22:25:59.414758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.235 qpair failed and we were unable to recover it. 00:37:41.235 [2024-10-12 22:25:59.415141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.235 [2024-10-12 22:25:59.415172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.235 qpair failed and we were unable to recover it. 
00:37:41.235 through 00:37:41.238 [2024-10-12 22:25:59.415549 through 22:25:59.458113] the same error pair repeats continuously: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."
00:37:41.238 [2024-10-12 22:25:59.458485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.458513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.458883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.458910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.459157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.459189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.459440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.459468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.459826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.459856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 
00:37:41.238 [2024-10-12 22:25:59.460217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.460247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.460616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.460644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.461007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.461035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.461401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.461430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.461786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.461814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 
00:37:41.238 [2024-10-12 22:25:59.462185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.462214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.462476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.462505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.462867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.462895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.463270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.463299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.463661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.463689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 
00:37:41.238 [2024-10-12 22:25:59.464062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.464092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.464465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.464494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.464729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.464762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.465127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.465160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.465517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.465546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 
00:37:41.238 [2024-10-12 22:25:59.465906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.465934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.466283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.466312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.466669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.466698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.467056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.467086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.467476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.467505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 
00:37:41.238 [2024-10-12 22:25:59.467878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.467906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.468270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.468299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.468664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.468692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.469026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.469054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.469417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.469448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 
00:37:41.238 [2024-10-12 22:25:59.469809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.469837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.470199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.470228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.470632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.470661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.471019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.471047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.471413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.471441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 
00:37:41.238 [2024-10-12 22:25:59.471784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.471817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.238 qpair failed and we were unable to recover it. 00:37:41.238 [2024-10-12 22:25:59.472176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.238 [2024-10-12 22:25:59.472206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.472581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.472610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.472987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.473016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.473352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.473382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 
00:37:41.239 [2024-10-12 22:25:59.473743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.473772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.474131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.474161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.474533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.474563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.474923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.474951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.475332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.475362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 
00:37:41.239 [2024-10-12 22:25:59.475719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.475747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.476118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.476149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.476499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.476528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.476878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.476907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.477280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.477313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 
00:37:41.239 [2024-10-12 22:25:59.477660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.477690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.478069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.478100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.478448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.478478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.478833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.478862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.479225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.479256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 
00:37:41.239 [2024-10-12 22:25:59.479621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.479649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.480047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.480075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.480505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.480534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.480889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.480918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.481274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.481304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 
00:37:41.239 [2024-10-12 22:25:59.481556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.481585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.481939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.481971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.482337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.482368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.482731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.482760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.483132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.483163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 
00:37:41.239 [2024-10-12 22:25:59.483542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.483573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.483936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.483965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.484327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.484357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.484728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.484757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.485130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.485159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 
00:37:41.239 [2024-10-12 22:25:59.485528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.485558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.485917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.485947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.486288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.486318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.239 [2024-10-12 22:25:59.486670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.239 [2024-10-12 22:25:59.486698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.239 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.487061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.487090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 
00:37:41.240 [2024-10-12 22:25:59.487508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.487544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.487881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.487915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.488280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.488311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.488689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.488718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.489077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.489114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 
00:37:41.240 [2024-10-12 22:25:59.489449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.489477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.489853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.489882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.490248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.490277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.490638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.490667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.491028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.491058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 
00:37:41.240 [2024-10-12 22:25:59.491422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.491451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.491836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.491864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.492226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.492256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.492497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.492527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.492886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.492915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 
00:37:41.240 [2024-10-12 22:25:59.493269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.493298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.493670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.493698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.494065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.494093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.494443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.494476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.494851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.494880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 
00:37:41.240 [2024-10-12 22:25:59.495244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.495275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.495710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.495739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.496098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.496141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.496494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.496522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.496873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.496900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 
00:37:41.240 [2024-10-12 22:25:59.497264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.497294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.497664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.497692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.498055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.498086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.498455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.498484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.498845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.498873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 
00:37:41.240 [2024-10-12 22:25:59.499236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.499266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.499633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.499660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.500023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.500056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.500494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.500523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.500881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.500911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 
00:37:41.240 [2024-10-12 22:25:59.501277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.501307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.501544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.501571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.240 [2024-10-12 22:25:59.501918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.240 [2024-10-12 22:25:59.501947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.240 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.502192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.502220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.502467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.502498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 
00:37:41.241 [2024-10-12 22:25:59.502788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.502825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.503219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.503250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.503611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.503641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.504013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.504042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.504312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.504341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 
00:37:41.241 [2024-10-12 22:25:59.504562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.504592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.504943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.504984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.505326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.505358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.505725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.505752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.506022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.506050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 
00:37:41.241 [2024-10-12 22:25:59.506497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.506528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.506874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.506903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.507258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.507287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.507667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.507695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.508173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.508203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 
00:37:41.241 [2024-10-12 22:25:59.508548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.508576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.508954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.508981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.509361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.509390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.509773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.509801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.510142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.510171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 
00:37:41.241 [2024-10-12 22:25:59.510554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.510583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.510961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.510989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.511355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.511385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.511746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.511776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.512139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.512169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 
00:37:41.241 [2024-10-12 22:25:59.512429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.512456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.512733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.241 [2024-10-12 22:25:59.512763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.241 qpair failed and we were unable to recover it. 00:37:41.241 [2024-10-12 22:25:59.513016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.513044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.513401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.513431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.513771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.513799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 
00:37:41.242 [2024-10-12 22:25:59.514148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.514178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.514544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.514572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.514915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.514944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.515293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.515322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.515664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.515693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 
00:37:41.242 [2024-10-12 22:25:59.515906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.515934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.516305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.516334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.516762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.516791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.517216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.517245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.517610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.517639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 
00:37:41.242 [2024-10-12 22:25:59.517899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.517934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.518264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.518292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.518640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.518668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.518913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.518944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.519400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.519430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 
00:37:41.242 [2024-10-12 22:25:59.519739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.519767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.520128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.520158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.520529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.520557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.520912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.520940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.521220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.521248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 
00:37:41.242 [2024-10-12 22:25:59.521627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.521656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.521998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.522033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.522395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.522424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.522773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.522800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 00:37:41.242 [2024-10-12 22:25:59.523240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.242 [2024-10-12 22:25:59.523270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.242 qpair failed and we were unable to recover it. 
00:37:41.242 [2024-10-12 22:25:59.523639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.523666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.524045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.524074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.524442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.524472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.524832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.524861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.525220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.525257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.525604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.525633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.526009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.526037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.526406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.526435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.526874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.526902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.527287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.527316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.527689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.527717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.528082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.242 [2024-10-12 22:25:59.528120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.242 qpair failed and we were unable to recover it.
00:37:41.242 [2024-10-12 22:25:59.528463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.528497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.528861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.528889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.529267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.529296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.529662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.529690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.530050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.530078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.530446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.530474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.530714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.530745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.531131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.531161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.531494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.531523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.531868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.531896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.532275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.532304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.532675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.532703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.533131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.533162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.533500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.533528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.533769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.533800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.534160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.534190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.534558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.534588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.534839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.534867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.535223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.535261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.535636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.535665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.535933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.535960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.536251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.536280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.536650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.536679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.537037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.537067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.537435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.537466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.537712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.537740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.538062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.538090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.538468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.538498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.538756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.538783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.539013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.539041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.539403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.539433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.539798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.539827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.540206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.540234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.540604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.540632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.540991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.541019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.541425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.541453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.541814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.541845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.542219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.542248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.542619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.542646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.243 [2024-10-12 22:25:59.542906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.243 [2024-10-12 22:25:59.542933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.243 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.543285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.543327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.543689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.543717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.544087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.544140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.544527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.544557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.544927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.544955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.545374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.545404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.545767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.545795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.546178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.546210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.546577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.546605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.546967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.546996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.547365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.547394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.547746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.547775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.548051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.548078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.548472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.548509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.548868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.548897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.549158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.549186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.549502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.549532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.549901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.549931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.550272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.550301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.550645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.550674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.551038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.551067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.551478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.551509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.551847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.551877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.552244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.552275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.552639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.552667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.553025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.553052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.553424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.553453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.553821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.553851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.554219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.554247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.554626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.554655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.555020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.555049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.555424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.555453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.555814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.555843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.556209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.556247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.556580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.556608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.556970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.556999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.557261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.557289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.557627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.244 [2024-10-12 22:25:59.557655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.244 qpair failed and we were unable to recover it.
00:37:41.244 [2024-10-12 22:25:59.558001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.558032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.558290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.558319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.558553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.558587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.558971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.559000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.559352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.559382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.559614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.559645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.559998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.560027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.560187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.560219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.560596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.560625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.560866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.560893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.561273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.561302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.561674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.561703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.562071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.562099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.562451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.562480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.562927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.562956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.563203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.563232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.563588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.563617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.563991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.564019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.564373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.564404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.564771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.564801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.565165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.565195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.565573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.565603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.565852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.565879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.566269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.566298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.566640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.245 [2024-10-12 22:25:59.566670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.245 qpair failed and we were unable to recover it.
00:37:41.245 [2024-10-12 22:25:59.567023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.567052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.567433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.567464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.567717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.567745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.568093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.568148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.568530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.568560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 
00:37:41.245 [2024-10-12 22:25:59.568920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.568948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.569299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.569329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.569705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.569733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.570115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.570145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.570398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.570427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 
00:37:41.245 [2024-10-12 22:25:59.570765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.570793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.571024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.571055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.571458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.571488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.245 qpair failed and we were unable to recover it. 00:37:41.245 [2024-10-12 22:25:59.571864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.245 [2024-10-12 22:25:59.571894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.572273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.572302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 
00:37:41.246 [2024-10-12 22:25:59.572670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.572700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.573067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.573096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.573488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.573524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.573894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.573923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.574283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.574314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 
00:37:41.246 [2024-10-12 22:25:59.574551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.574579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.574847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.574876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.575228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.575257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.575634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.575662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.576021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.576049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 
00:37:41.246 [2024-10-12 22:25:59.576426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.576456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.576896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.576923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.577268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.577297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.577671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.577699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.578064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.578092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 
00:37:41.246 [2024-10-12 22:25:59.578530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.578558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.578809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.578837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.579183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.579213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.579589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.579618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.579869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.579901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 
00:37:41.246 [2024-10-12 22:25:59.580262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.580293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.580654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.580683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.580934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.580962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.581220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.581249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.581630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.581658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 
00:37:41.246 [2024-10-12 22:25:59.582026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.582055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.582451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.582480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.582843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.582870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.583231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.583260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.246 qpair failed and we were unable to recover it. 00:37:41.246 [2024-10-12 22:25:59.583619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.246 [2024-10-12 22:25:59.583648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 
00:37:41.247 [2024-10-12 22:25:59.583906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.583933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.584273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.584305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.584688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.584716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.585080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.585120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.585481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.585509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 
00:37:41.247 [2024-10-12 22:25:59.585878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.585906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.586277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.586306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.586648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.586678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.586934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.586962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.587308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.587337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 
00:37:41.247 [2024-10-12 22:25:59.587769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.587798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.588169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.588198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.588579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.588613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.588982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.589009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.589349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.589380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 
00:37:41.247 [2024-10-12 22:25:59.589727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.589756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.590129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.590159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.590315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.590348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.590719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.590747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.591136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.591168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 
00:37:41.247 [2024-10-12 22:25:59.591638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.591667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.591940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.591968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.592322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.592352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.592690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.592720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.593015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.593043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 
00:37:41.247 [2024-10-12 22:25:59.593460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.593488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.593741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.593769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.594147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.594177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.594546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.594575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.594918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.594946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 
00:37:41.247 [2024-10-12 22:25:59.595328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.595357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.595727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.595755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.596124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.596155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.596553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.596582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.596948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.596975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 
00:37:41.247 [2024-10-12 22:25:59.597345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.597374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.597743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.247 [2024-10-12 22:25:59.597772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.247 qpair failed and we were unable to recover it. 00:37:41.247 [2024-10-12 22:25:59.598137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.598167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.598556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.598587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.598943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.598973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 
00:37:41.248 [2024-10-12 22:25:59.599329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.599359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.599717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.599745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.600101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.600140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.600435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.600463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.600891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.600920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 
00:37:41.248 [2024-10-12 22:25:59.601264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.601294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.601658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.601686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.602055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.602083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.602443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.602472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.602907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.602936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 
00:37:41.248 [2024-10-12 22:25:59.603270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.603301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.603647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.603674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.603917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.603951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.604327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.604357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.604608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.604635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 
00:37:41.248 [2024-10-12 22:25:59.604990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.605018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.605328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.605359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.605727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.605756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.606124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.606154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.606524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.606552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 
00:37:41.248 [2024-10-12 22:25:59.606914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.606942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.607304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.607333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.607698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.607726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.608071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.608101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.608478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.608507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 
00:37:41.248 [2024-10-12 22:25:59.608810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.608838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.609210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.609241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.609618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.609646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.609988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.610017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.610392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.610422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 
00:37:41.248 [2024-10-12 22:25:59.610773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.610801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.611160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.611189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.611586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.611614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.611861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.611892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.612245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.612276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 
00:37:41.248 [2024-10-12 22:25:59.612633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.612662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.613070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.613098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.248 qpair failed and we were unable to recover it. 00:37:41.248 [2024-10-12 22:25:59.613396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.248 [2024-10-12 22:25:59.613424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.613768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.613795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.614134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.614165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 
00:37:41.249 [2024-10-12 22:25:59.614535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.614563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.614929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.614958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.615376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.615405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.615731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.615760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.616137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.616168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 
00:37:41.249 [2024-10-12 22:25:59.616516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.616545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.616889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.616919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.617262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.617293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.617627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.617655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.618017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.618045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 
00:37:41.249 [2024-10-12 22:25:59.618424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.618454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.618834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.618862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.619227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.619271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.619634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.619663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.620029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.620058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 
00:37:41.249 [2024-10-12 22:25:59.620306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.620335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.620689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.620723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.621092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.621131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.621506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.621534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.621907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.621934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 
00:37:41.249 [2024-10-12 22:25:59.622289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.622319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.622684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.622713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.623070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.623099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.623529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.623559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.623912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.623941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 
00:37:41.249 [2024-10-12 22:25:59.624175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.624205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.624574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.624604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.624971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.624998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.625337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.625368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.625739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.625767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 
00:37:41.249 [2024-10-12 22:25:59.626128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.626158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.626527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.249 [2024-10-12 22:25:59.626558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.249 qpair failed and we were unable to recover it. 00:37:41.249 [2024-10-12 22:25:59.626990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.627018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.627351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.627379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.627618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.627646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 
00:37:41.250 [2024-10-12 22:25:59.628018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.628048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.628411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.628440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.628814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.628843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.629185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.629215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.629583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.629612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 
00:37:41.250 [2024-10-12 22:25:59.629974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.630001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.630368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.630397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.630752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.630783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.631123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.631152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.631531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.631559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 
00:37:41.250 [2024-10-12 22:25:59.631796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.631826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.632173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.632203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.632554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.632583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.632948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.632978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.633332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.633362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 
00:37:41.250 [2024-10-12 22:25:59.633721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.633748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.634122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.634151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.634529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.634564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.634925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.634953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 00:37:41.250 [2024-10-12 22:25:59.635294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.250 [2024-10-12 22:25:59.635322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.250 qpair failed and we were unable to recover it. 
00:37:41.250 [2024-10-12 22:25:59.635692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.635721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.250 qpair failed and we were unable to recover it.
00:37:41.250 [2024-10-12 22:25:59.636079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.636116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.250 qpair failed and we were unable to recover it.
00:37:41.250 [2024-10-12 22:25:59.636491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.636519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.250 qpair failed and we were unable to recover it.
00:37:41.250 [2024-10-12 22:25:59.636880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.636907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.250 qpair failed and we were unable to recover it.
00:37:41.250 [2024-10-12 22:25:59.637273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.637302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.250 qpair failed and we were unable to recover it.
00:37:41.250 [2024-10-12 22:25:59.637650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.637678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.250 qpair failed and we were unable to recover it.
00:37:41.250 [2024-10-12 22:25:59.638043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.638072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.250 qpair failed and we were unable to recover it.
00:37:41.250 [2024-10-12 22:25:59.638367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.638397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.250 qpair failed and we were unable to recover it.
00:37:41.250 [2024-10-12 22:25:59.638761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.638789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.250 qpair failed and we were unable to recover it.
00:37:41.250 [2024-10-12 22:25:59.639168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.639197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.250 qpair failed and we were unable to recover it.
00:37:41.250 [2024-10-12 22:25:59.639560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.250 [2024-10-12 22:25:59.639587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.639967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.639996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.640347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.640377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.640720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.640748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.641122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.641152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.641520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.641548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.641907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.641936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.642290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.642319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.642680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.642708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.642945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.642978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.643342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.643371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.643690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.643717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.644087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.644125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.644373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.644401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.644760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.644791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.645152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.645182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.645539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.645567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.645929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.645956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.646193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.646222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.646578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.646607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.646974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.647005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.647374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.647404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.647772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.647800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.648187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.648217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.648573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.648600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.648961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.648990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.649405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.649436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.649775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.649809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.650099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.650141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.650524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.650551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.650913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.650941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.651199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.651231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.651575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.651603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.651963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.651993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.652411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.652440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.652802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.652830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.653196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.653224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.653584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.653612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.653980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.654008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.654364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.251 [2024-10-12 22:25:59.654392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.251 qpair failed and we were unable to recover it.
00:37:41.251 [2024-10-12 22:25:59.654763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.654791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.655158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.655187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.655529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.655558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.655920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.655948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.656338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.656369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.656709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.656738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.657124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.657153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.657530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.657557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.657915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.657944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.658313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.658342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.658521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.658549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.658936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.658964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.659191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.659223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.659590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.659617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.659989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.660018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.660449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.660478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.660837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.660866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.661208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.661238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.661616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.661643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.662015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.662043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.662422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.662452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.662822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.662850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.663211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.663241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.663609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.663639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.664002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.664029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.664398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.664427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.664787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.664815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.665179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.665213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.665571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.665599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.665969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.665999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.666357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.666386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.666756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.666785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.667156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.667185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.667544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.667571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.667913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.667942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.668293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.668322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.668700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.668728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.669091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.669144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.669605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.669633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.252 [2024-10-12 22:25:59.670043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.252 [2024-10-12 22:25:59.670072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.252 qpair failed and we were unable to recover it.
00:37:41.253 [2024-10-12 22:25:59.670462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.253 [2024-10-12 22:25:59.670492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.253 qpair failed and we were unable to recover it.
00:37:41.253 [2024-10-12 22:25:59.670850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.253 [2024-10-12 22:25:59.670878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.253 qpair failed and we were unable to recover it.
00:37:41.253 [2024-10-12 22:25:59.671241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.253 [2024-10-12 22:25:59.671272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.253 qpair failed and we were unable to recover it.
00:37:41.253 [2024-10-12 22:25:59.671635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.253 [2024-10-12 22:25:59.671662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.253 qpair failed and we were unable to recover it.
00:37:41.253 [2024-10-12 22:25:59.671925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.253 [2024-10-12 22:25:59.671955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.253 qpair failed and we were unable to recover it.
00:37:41.253 [2024-10-12 22:25:59.672363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.672392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.672622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.672653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.672818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.672845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.673097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.673140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.673489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.673518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 
00:37:41.253 [2024-10-12 22:25:59.673888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.673916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.674279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.674307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.674685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.674711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.675074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.675116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.675481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.675511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 
00:37:41.253 [2024-10-12 22:25:59.675854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.675883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.676243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.676272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.676644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.676672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.677143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.677174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.677599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.677628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 
00:37:41.253 [2024-10-12 22:25:59.677994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.678022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.678364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.678394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.678762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.678789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.679117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.679149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.679505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.679533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 
00:37:41.253 [2024-10-12 22:25:59.679898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.679926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.680366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.680396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.680734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.680769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.681130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.681159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.681556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.681583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 
00:37:41.253 [2024-10-12 22:25:59.681958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.681986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.682347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.682377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.682808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.682835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.683023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.683050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.683463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.683492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 
00:37:41.253 [2024-10-12 22:25:59.683854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.683883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.684263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.684293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.253 [2024-10-12 22:25:59.684557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.253 [2024-10-12 22:25:59.684588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.253 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.684838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.684867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.685212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.685242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 
00:37:41.254 [2024-10-12 22:25:59.685616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.685643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.685902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.685933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.686188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.686220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.686600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.686629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.686992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.687020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 
00:37:41.254 [2024-10-12 22:25:59.687386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.687415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.687777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.687806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.688169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.688200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.688597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.688625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.688976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.689003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 
00:37:41.254 [2024-10-12 22:25:59.689347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.689376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.689737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.689765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.690134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.690162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.690553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.690581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.690958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.690990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 
00:37:41.254 [2024-10-12 22:25:59.691242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.691273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.691509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.691541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.691894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.691920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.692288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.692318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.692676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.692704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 
00:37:41.254 [2024-10-12 22:25:59.693068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.693096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.693471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.693499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.693870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.693898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.694268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.694296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.694642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.694670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 
00:37:41.254 [2024-10-12 22:25:59.695034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.695061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.695503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.695535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.695780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.695820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.696164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.696194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.696584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.696612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 
00:37:41.254 [2024-10-12 22:25:59.696975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.697003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.697376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.697406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.697810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.697838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.698084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.698125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.698505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.698533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 
00:37:41.254 [2024-10-12 22:25:59.698889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.698917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.254 qpair failed and we were unable to recover it. 00:37:41.254 [2024-10-12 22:25:59.699288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.254 [2024-10-12 22:25:59.699316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.699667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.699695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.699947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.699974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.700308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.700340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 
00:37:41.255 [2024-10-12 22:25:59.700704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.700733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.701096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.701138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.701497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.701524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.701894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.701922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.702332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.702361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 
00:37:41.255 [2024-10-12 22:25:59.702701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.702731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.703116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.703145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.703491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.703526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.703848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.703876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 00:37:41.255 [2024-10-12 22:25:59.704241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.704271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 
00:37:41.255 [2024-10-12 22:25:59.704611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.255 [2024-10-12 22:25:59.704639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.255 qpair failed and we were unable to recover it. 
00:37:41.255 [... the same connect()/qpair error pair repeats with identical content through 2024-10-12 22:25:59.748664 ...] 
00:37:41.531 [2024-10-12 22:25:59.749089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.749129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.749512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.749540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.749899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.749927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.750289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.750320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.750685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.750713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 
00:37:41.531 [2024-10-12 22:25:59.751118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.751150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.751520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.751548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.751806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.751837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.752213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.752243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.752610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.752644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 
00:37:41.531 [2024-10-12 22:25:59.752892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.752921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.753255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.753284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.753663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.753691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.754054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.754082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.754440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.754469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 
00:37:41.531 [2024-10-12 22:25:59.754832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.754859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.755205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.755235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.755581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.755611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.755975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.756003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.756344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.756375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 
00:37:41.531 [2024-10-12 22:25:59.756752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.756780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.757121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.757152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.757504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.757533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.757899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.757929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.758295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.758326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 
00:37:41.531 [2024-10-12 22:25:59.758701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.758729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.759092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.759134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.759502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.759532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.759888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.531 [2024-10-12 22:25:59.759916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.531 qpair failed and we were unable to recover it. 00:37:41.531 [2024-10-12 22:25:59.760170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.760204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 
00:37:41.532 [2024-10-12 22:25:59.760612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.760640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.760998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.761026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.761393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.761421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.761787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.761815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.762184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.762215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 
00:37:41.532 [2024-10-12 22:25:59.762568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.762598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.762973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.763002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.763344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.763373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.763623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.763651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.763902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.763929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 
00:37:41.532 [2024-10-12 22:25:59.764288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.764317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.764694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.764723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.765064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.765091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.765489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.765517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.765879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.765906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 
00:37:41.532 [2024-10-12 22:25:59.766247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.766277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.766704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.766732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.767090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.767132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.767379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.767407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.767773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.767809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 
00:37:41.532 [2024-10-12 22:25:59.768171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.768201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.768456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.768486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.768860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.768888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.769231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.769262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.769635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.769663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 
00:37:41.532 [2024-10-12 22:25:59.770025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.770053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.770421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.770450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.770786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.770814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.771162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.771193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.771437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.771468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 
00:37:41.532 [2024-10-12 22:25:59.771825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.771853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.772214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.532 [2024-10-12 22:25:59.772243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.532 qpair failed and we were unable to recover it. 00:37:41.532 [2024-10-12 22:25:59.772604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.772632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.773002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.773030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.773409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.773437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 
00:37:41.533 [2024-10-12 22:25:59.773810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.773841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.774199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.774230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.774648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.774677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.775021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.775049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.775389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.775418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 
00:37:41.533 [2024-10-12 22:25:59.775781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.775809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.776241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.776272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.776640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.776667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.777029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.777058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.777196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.777228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 
00:37:41.533 [2024-10-12 22:25:59.777541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.777568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.777936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.777967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.778341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.778371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.778742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.778770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.779142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.779172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 
00:37:41.533 [2024-10-12 22:25:59.779536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.779564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.779926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.779954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.780336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.780366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.780744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.780773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 00:37:41.533 [2024-10-12 22:25:59.781137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.533 [2024-10-12 22:25:59.781166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.533 qpair failed and we were unable to recover it. 
00:37:41.533 [2024-10-12 22:25:59.781612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.781640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.781989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.782017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.782388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.782417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.782855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.782886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.783142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.783178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.783562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.783591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.783945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.783974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.784323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.784354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.784721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.784751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.785121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.785154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.785523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.785553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.785953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.785982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.786334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.786364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.786722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.786752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.533 [2024-10-12 22:25:59.786999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.533 [2024-10-12 22:25:59.787029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.533 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.787268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.787299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.787535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.787567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.787920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.787949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.788311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.788345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.788711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.788741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.789118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.789148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.789538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.789568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.789941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.789972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.790308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.790338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.790700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.790730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.791081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.791120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.791530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.791559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.791928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.791956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.792299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.792331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.792700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.792729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.793128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.793159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.793527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.793557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.793954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.793983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.794338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.794368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.794727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.794756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.795146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.795175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.795510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.795539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.795914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.795941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.796094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.796138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.796564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.796595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.796969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.796997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.797270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.797298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.797552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.797580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.797934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.797961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.798349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.798385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.798746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.798784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.799125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.799155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.799513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.799542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.799811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.799838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.800174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.800202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.800547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.800576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.800887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.800924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.801286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.801315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.534 qpair failed and we were unable to recover it.
00:37:41.534 [2024-10-12 22:25:59.801589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.534 [2024-10-12 22:25:59.801618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.801874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.801904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.802198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.802227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.802603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.802630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.802989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.803017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.803365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.803393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.803760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.803790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.804165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.804194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.804577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.804605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.804853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.804880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.805239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.805269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.805635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.805664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.806037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.806069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.806443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.806474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.806839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.806868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.807237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.807265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.807647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.807674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.808040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.808070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.808540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.808569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.808835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.808862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.809227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.809256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.809635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.809662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.810033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.810060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.810446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.810476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.810861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.810892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.811154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.811184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.811529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.811556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.811997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.812025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.812338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.812367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.812622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.812649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.813001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.813037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.813471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.813508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.813843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.813873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.814232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.814261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.814631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.814658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.815024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.815054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.815446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.815477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.815717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.815747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.816123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.535 [2024-10-12 22:25:59.816151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.535 qpair failed and we were unable to recover it.
00:37:41.535 [2024-10-12 22:25:59.816404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.816433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.816800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.816828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.817191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.817223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.817601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.817629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.817873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.817900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.818251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.818279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.818641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.818672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.819024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.819052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.819304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.819335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.819716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.819747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.820117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.820147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.820506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.820533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.820899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.820927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.821175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.821206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.821546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.821573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.821947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.821978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.822360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.822391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.822753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.822782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.823149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.823179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.823637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.823665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.824031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.824061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.824510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.824540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.824916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.824943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.825294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.825323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.825685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.825714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.826092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.826139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.826381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.826409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.826763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.826791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.827051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.827079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.827474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.827504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.827868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.827895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.828269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.828299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.828664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.828701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.829062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.829090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.829469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.829498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.829863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.829892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.830159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.830189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.830522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.830550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.830923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.536 [2024-10-12 22:25:59.830953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.536 qpair failed and we were unable to recover it.
00:37:41.536 [2024-10-12 22:25:59.831305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.831338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.831688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.831717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.831980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.832007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.832332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.832361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.832721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.832750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.833123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.833153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.833538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.833567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.833867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.833896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.834249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.834278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.834653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.834682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.835037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.835066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.835418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.835448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.835817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.835845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.836210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.836239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.836622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.836651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.837007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.837036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.837391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.837421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.837735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.837774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.838140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.838171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.838610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.838639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.838907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.838935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.839282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.839310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.839578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.839606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.839943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.839973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.840373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.840403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.840767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.840795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.841149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.841178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.841534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.841563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.841931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.841959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.842374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.842405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.842667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.537 [2024-10-12 22:25:59.842694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.537 qpair failed and we were unable to recover it.
00:37:41.537 [2024-10-12 22:25:59.843052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.843080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.843417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.843446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.843808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.843842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.844205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.844235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.844617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.844646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.844901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.844928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.845263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.845292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.845545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.845574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.845989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.846017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.846297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.846326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.846707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.846737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.847101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.847142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.847481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.847509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.847879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.847907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.848281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.848310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.848720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.848748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.849114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.849146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.849528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.849556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.849806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.849833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.850187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.850217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.850551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.850579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.850952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.850979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.851367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.851397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.851765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.851794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.852162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.852192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.852571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.852598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.852968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.852995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.853275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.538 [2024-10-12 22:25:59.853305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.538 qpair failed and we were unable to recover it.
00:37:41.538 [2024-10-12 22:25:59.853682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.853711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.538 qpair failed and we were unable to recover it. 00:37:41.538 [2024-10-12 22:25:59.854086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.854128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.538 qpair failed and we were unable to recover it. 00:37:41.538 [2024-10-12 22:25:59.854476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.854506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.538 qpair failed and we were unable to recover it. 00:37:41.538 [2024-10-12 22:25:59.854885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.854914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.538 qpair failed and we were unable to recover it. 00:37:41.538 [2024-10-12 22:25:59.855275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.855305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.538 qpair failed and we were unable to recover it. 
00:37:41.538 [2024-10-12 22:25:59.855547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.855574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.538 qpair failed and we were unable to recover it. 00:37:41.538 [2024-10-12 22:25:59.855927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.855956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.538 qpair failed and we were unable to recover it. 00:37:41.538 [2024-10-12 22:25:59.856251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.856280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.538 qpair failed and we were unable to recover it. 00:37:41.538 [2024-10-12 22:25:59.856541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.856568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.538 qpair failed and we were unable to recover it. 00:37:41.538 [2024-10-12 22:25:59.856931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.856960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.538 qpair failed and we were unable to recover it. 
00:37:41.538 [2024-10-12 22:25:59.857354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.538 [2024-10-12 22:25:59.857384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.857746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.857773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.858142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.858174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.858550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.858578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.858829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.858864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 
00:37:41.539 [2024-10-12 22:25:59.859221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.859251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.859588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.859618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.859976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.860003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.860360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.860392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.860832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.860861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 
00:37:41.539 [2024-10-12 22:25:59.861231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.861260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.861626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.861653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.862004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.862032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.862246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.862276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.862661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.862689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 
00:37:41.539 [2024-10-12 22:25:59.863133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.863163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.863514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.863543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.863912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.863940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.864310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.864341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.864711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.864739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 
00:37:41.539 [2024-10-12 22:25:59.865138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.865168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.865542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.865569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.865809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.865836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.866264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.866298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.866425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.866454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 
00:37:41.539 [2024-10-12 22:25:59.866837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.866865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.867243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.867274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.867655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.867684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.868045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.868075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.868433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.868462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 
00:37:41.539 [2024-10-12 22:25:59.868886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.868914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.869252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.869283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.869651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.869680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.870071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.870100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.870464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.870493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 
00:37:41.539 [2024-10-12 22:25:59.870867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.870894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.871332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.871362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.871706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.871735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.539 [2024-10-12 22:25:59.872121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.539 [2024-10-12 22:25:59.872150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.539 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.872528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.872555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 
00:37:41.540 [2024-10-12 22:25:59.872900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.872928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.873288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.873319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.873691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.873719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.874087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.874129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.874492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.874527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 
00:37:41.540 [2024-10-12 22:25:59.874877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.874905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.875280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.875310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.875683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.875710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.876072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.876114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.876473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.876501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 
00:37:41.540 [2024-10-12 22:25:59.876868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.876897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.877160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.877188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.877602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.877630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.878007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.878035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.878391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.878421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 
00:37:41.540 [2024-10-12 22:25:59.878787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.878815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.879181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.879212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.879543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.879570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.879794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.879822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.880208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.880238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 
00:37:41.540 [2024-10-12 22:25:59.880608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.880637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.880995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.881024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.881377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.881406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.881763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.881793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.882160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.882189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 
00:37:41.540 [2024-10-12 22:25:59.882557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.882585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.882952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.882980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.883349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.883379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.883747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.883774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.884152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.884181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 
00:37:41.540 [2024-10-12 22:25:59.884561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.884589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.884863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.884892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.885130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.885163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.885402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.885430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.885788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.885816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 
00:37:41.540 [2024-10-12 22:25:59.886162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.886191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.886562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.886591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.886934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.886961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.540 [2024-10-12 22:25:59.887221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.540 [2024-10-12 22:25:59.887250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.540 qpair failed and we were unable to recover it. 00:37:41.541 [2024-10-12 22:25:59.887652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.541 [2024-10-12 22:25:59.887680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.541 qpair failed and we were unable to recover it. 
00:37:41.543 [2024-10-12 22:25:59.930477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.543 [2024-10-12 22:25:59.930508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.543 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.930861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.930895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.931255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.931285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.931598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.931627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.932029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.932057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 
00:37:41.544 [2024-10-12 22:25:59.932427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.932456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.932818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.932849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.933014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.933043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.933415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.933445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.933802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.933830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 
00:37:41.544 [2024-10-12 22:25:59.934219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.934248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.934564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.934592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.934968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.934997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.935240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.935268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.935616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.935646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 
00:37:41.544 [2024-10-12 22:25:59.936001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.936030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.936365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.936394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.936640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.936668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.936923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.936953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.937289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.937318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 
00:37:41.544 [2024-10-12 22:25:59.937693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.937721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.938062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.938090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.938488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.938518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.938875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.938902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.939258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.939287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 
00:37:41.544 [2024-10-12 22:25:59.939652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.939681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.939930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.939957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.940288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.940317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.940684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.940714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.941081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.941121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 
00:37:41.544 [2024-10-12 22:25:59.941492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.941521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.941899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.941929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.942189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.942217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.942587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.942615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.942967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.942996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 
00:37:41.544 [2024-10-12 22:25:59.943360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.943389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.943758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.943788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.944163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.944194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.944575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.544 [2024-10-12 22:25:59.944603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.544 qpair failed and we were unable to recover it. 00:37:41.544 [2024-10-12 22:25:59.944963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.944990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 
00:37:41.545 [2024-10-12 22:25:59.945353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.945381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.945745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.945779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.946148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.946180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.946565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.946593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.946948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.946975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 
00:37:41.545 [2024-10-12 22:25:59.947339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.947370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.947717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.947745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.948007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.948034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.948395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.948424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.948794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.948824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 
00:37:41.545 [2024-10-12 22:25:59.949184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.949214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.949584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.949611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.949976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.950003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.950360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.950389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.950756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.950784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 
00:37:41.545 [2024-10-12 22:25:59.951154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.951183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.951540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.951568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.951818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.951845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.952197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.952226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.952598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.952625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 
00:37:41.545 [2024-10-12 22:25:59.952981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.953009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.953287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.953317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.953683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.953712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.954124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.954154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.954496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.954525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 
00:37:41.545 [2024-10-12 22:25:59.954893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.954921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.955283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.955314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.955675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.955703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.956066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.956095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.956467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.956495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 
00:37:41.545 [2024-10-12 22:25:59.956869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.956897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.957259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.957289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.957638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.957666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.958031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.958060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.958439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.958470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 
00:37:41.545 [2024-10-12 22:25:59.958826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.958854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.959234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.959264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.959630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.959657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.545 qpair failed and we were unable to recover it. 00:37:41.545 [2024-10-12 22:25:59.960024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.545 [2024-10-12 22:25:59.960054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.960407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.960438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 
00:37:41.546 [2024-10-12 22:25:59.960779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.960807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.961172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.961207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.961572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.961600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.961964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.961991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.962339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.962371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 
00:37:41.546 [2024-10-12 22:25:59.962781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.962809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.963172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.963202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.963577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.963605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.963969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.963997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.964355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.964385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 
00:37:41.546 [2024-10-12 22:25:59.964744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.964773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.965141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.965170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.965533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.965561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.965943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.965970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.966347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.966375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 
00:37:41.546 [2024-10-12 22:25:59.966826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.966855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.967215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.967246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.967618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.967646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.967997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.968025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.968417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.968446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 
00:37:41.546 [2024-10-12 22:25:59.968809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.968836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.969206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.969236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.969590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.969618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.969982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.970009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.970375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.970403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 
00:37:41.546 [2024-10-12 22:25:59.970743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.970770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.971136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.971166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.971528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.971558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.971933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.971962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.972375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.972403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 
00:37:41.546 [2024-10-12 22:25:59.972750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.972778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.973129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.973157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.973507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.973536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.546 qpair failed and we were unable to recover it. 00:37:41.546 [2024-10-12 22:25:59.973905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.546 [2024-10-12 22:25:59.973935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.974226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.974255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 
00:37:41.547 [2024-10-12 22:25:59.974402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.974429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.974814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.974842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.975188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.975218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.975584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.975612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.975988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.976015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 
00:37:41.547 [2024-10-12 22:25:59.976410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.976438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.976807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.976841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.977198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.977228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.977484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.977514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.977917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.977945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 
00:37:41.547 [2024-10-12 22:25:59.978286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.978315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.978689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.978717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.979077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.979118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.979487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.979514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.979771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.979798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 
00:37:41.547 [2024-10-12 22:25:59.980156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.980187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.980558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.980585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.981001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.981029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.981383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.981414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.981741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.981768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 
00:37:41.547 [2024-10-12 22:25:59.982124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.982154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.982552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.982581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.983007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.983037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.983376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.983406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.983767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.983796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 
00:37:41.547 [2024-10-12 22:25:59.984172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.984202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.984561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.984597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.984952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.984979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.985327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.985356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.985732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.985762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 
00:37:41.547 [2024-10-12 22:25:59.986127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.986155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.986604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.986632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.986996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.987024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.987394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.987423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.987794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.987823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 
00:37:41.547 [2024-10-12 22:25:59.988181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.988211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.547 [2024-10-12 22:25:59.988547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.547 [2024-10-12 22:25:59.988575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.547 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.988938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.988966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.989206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.989238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.989610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.989638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 
00:37:41.548 [2024-10-12 22:25:59.989998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.990028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.990286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.990315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.990582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.990609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.990859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.990887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.991257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.991286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 
00:37:41.548 [2024-10-12 22:25:59.991653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.991681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.992039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.992074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.992469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.992498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.992871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.992899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.993145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.993174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 
00:37:41.548 [2024-10-12 22:25:59.993530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.993557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.993920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.993948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.994339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.994367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.994727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.994755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.995124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.995155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 
00:37:41.548 [2024-10-12 22:25:59.995525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.995553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.995785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.995815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.996182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.996213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.996572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.996600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.996842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.996870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 
00:37:41.548 [2024-10-12 22:25:59.997253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.997285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.997686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.997715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.998073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.998114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.998499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.998529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 00:37:41.548 [2024-10-12 22:25:59.998911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.548 [2024-10-12 22:25:59.998938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.548 qpair failed and we were unable to recover it. 
00:37:41.548 [2024-10-12 22:25:59.999205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.548 [2024-10-12 22:25:59.999235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.548 qpair failed and we were unable to recover it.
00:37:41.548 [2024-10-12 22:25:59.999479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.548 [2024-10-12 22:25:59.999507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.548 qpair failed and we were unable to recover it.
00:37:41.548 [2024-10-12 22:25:59.999942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.548 [2024-10-12 22:25:59.999972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.548 qpair failed and we were unable to recover it.
00:37:41.548 [2024-10-12 22:26:00.000318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.548 [2024-10-12 22:26:00.000348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.548 qpair failed and we were unable to recover it.
00:37:41.548 [2024-10-12 22:26:00.000718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.548 [2024-10-12 22:26:00.000746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.548 qpair failed and we were unable to recover it.
00:37:41.549 [2024-10-12 22:26:00.001126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.549 [2024-10-12 22:26:00.001158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.549 qpair failed and we were unable to recover it.
00:37:41.549 [2024-10-12 22:26:00.001458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.549 [2024-10-12 22:26:00.001490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.549 qpair failed and we were unable to recover it.
00:37:41.549 [2024-10-12 22:26:00.001847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.549 [2024-10-12 22:26:00.001875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.549 qpair failed and we were unable to recover it.
00:37:41.549 [2024-10-12 22:26:00.002750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.549 [2024-10-12 22:26:00.002791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.549 qpair failed and we were unable to recover it.
00:37:41.549 [2024-10-12 22:26:00.003227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.549 [2024-10-12 22:26:00.003262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.549 qpair failed and we were unable to recover it.
00:37:41.549 [2024-10-12 22:26:00.003436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.549 [2024-10-12 22:26:00.003465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.549 qpair failed and we were unable to recover it.
00:37:41.549 [2024-10-12 22:26:00.003720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.549 [2024-10-12 22:26:00.003749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.549 qpair failed and we were unable to recover it.
00:37:41.549 [2024-10-12 22:26:00.004007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.549 [2024-10-12 22:26:00.004035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.549 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.004307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.004340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.004605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.004636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.004789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.004816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.005218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.005248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.005530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.005560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.005939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.005967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.006231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.006260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.006594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.006623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.006976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.007015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.007249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.007279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.007604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.007633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.007979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.008009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.008370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.008401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.008739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.008768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.009034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.009064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.009232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.009265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.009654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.009681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.009918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.009950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.010460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.010491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.010881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.010911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.011206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.821 [2024-10-12 22:26:00.011234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.821 qpair failed and we were unable to recover it.
00:37:41.821 [2024-10-12 22:26:00.011613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.011643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.012017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.012046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.012322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.012352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.012630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.012659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.013034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.013064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.013461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.013491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.013856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.013886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.014157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.014188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.014572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.014600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.014964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.014992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.015329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.015358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.015722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.015749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.016113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.016143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.016465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.016495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.016848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.016877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.017244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.017274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.017643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.017670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.018041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.018071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.018459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.018491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.018864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.018894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.019257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.019288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.019562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.019589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.019947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.019976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.020329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.020359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.020721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.020751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.021116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.021146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.021375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.021404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.021730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.021766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.022160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.022189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.022551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.022579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.022945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.022973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.023246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.023277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.023635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.023666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.023888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.023915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.024190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.024219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.024517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.024547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.024966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.024994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.025379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.822 [2024-10-12 22:26:00.025409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.822 qpair failed and we were unable to recover it.
00:37:41.822 [2024-10-12 22:26:00.025741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.025770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.026053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.026083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.026391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.026421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.026762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.026792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.027090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.027135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.027499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.027528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.028007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.028035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.028403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.028434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.028810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.028839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.029181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.029218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.029590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.029618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.030000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.030029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.030301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.030331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.030768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.030796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.031050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.031078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.031474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.031504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.031864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.031893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.032187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.032217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.032594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.032622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.032899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.032928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.033307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.033339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.033756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.033785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.034189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:41.823 [2024-10-12 22:26:00.034219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:41.823 qpair failed and we were unable to recover it.
00:37:41.823 [2024-10-12 22:26:00.034549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.034577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.034838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.034867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.035207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.035238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.035551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.035579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.035939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.035968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 
00:37:41.823 [2024-10-12 22:26:00.036228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.036257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.036553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.036587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.036920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.036948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.037329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.037359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.037741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.037772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 
00:37:41.823 [2024-10-12 22:26:00.038136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.038166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.038423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.038451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.038795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.038823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.039239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.039269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 00:37:41.823 [2024-10-12 22:26:00.039637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.823 [2024-10-12 22:26:00.039666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.823 qpair failed and we were unable to recover it. 
00:37:41.823 [2024-10-12 22:26:00.039928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.039957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.040381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.040412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.040568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.040595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.040975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.041003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.041376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.041408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 
00:37:41.824 [2024-10-12 22:26:00.041657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.041685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.042056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.042085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.042352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.042384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.042795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.042823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.043201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.043230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 
00:37:41.824 [2024-10-12 22:26:00.043607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.043635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.043977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.044005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.044347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.044377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.044734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.044765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.045154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.045183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 
00:37:41.824 [2024-10-12 22:26:00.045435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.045462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.045813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.045841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.046060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.046088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.046379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.046409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.046772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.046801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 
00:37:41.824 [2024-10-12 22:26:00.047186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.047217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.047566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.047596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.047954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.047982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.048294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.048324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.048693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.048721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 
00:37:41.824 [2024-10-12 22:26:00.048966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.048994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.049363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.049393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.049775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.049804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.050062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.050091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.050377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.050406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 
00:37:41.824 [2024-10-12 22:26:00.050623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.050652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.051019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.051054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.051411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.051441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.051798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.051830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 00:37:41.824 [2024-10-12 22:26:00.052247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.824 [2024-10-12 22:26:00.052277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.824 qpair failed and we were unable to recover it. 
00:37:41.825 [2024-10-12 22:26:00.052605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.052632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.053051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.053079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.053337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.053365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.053733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.053761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.054002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.054031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 
00:37:41.825 [2024-10-12 22:26:00.054254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.054283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.054671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.054699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.054964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.054999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.055367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.055400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.055747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.055776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 
00:37:41.825 [2024-10-12 22:26:00.056149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.056182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.056575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.056604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.056981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.057009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.057270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.057302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.057560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.057589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 
00:37:41.825 [2024-10-12 22:26:00.057838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.057866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.058144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.058174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.058598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.058628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.058991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.059021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.059383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.059413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 
00:37:41.825 [2024-10-12 22:26:00.059776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.059804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.060178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.060207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.060627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.060656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.061029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.061059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.061412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.061442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 
00:37:41.825 [2024-10-12 22:26:00.061814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.061843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.062206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.062235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.062502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.062531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.062945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.062973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.063245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.063273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 
00:37:41.825 [2024-10-12 22:26:00.063644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.063674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.064099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.064143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.064515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.064544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.064838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.064866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.065212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.065242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 
00:37:41.825 [2024-10-12 22:26:00.065505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.065534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.825 [2024-10-12 22:26:00.065810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.825 [2024-10-12 22:26:00.065848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.825 qpair failed and we were unable to recover it. 00:37:41.826 [2024-10-12 22:26:00.066129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.826 [2024-10-12 22:26:00.066160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.826 qpair failed and we were unable to recover it. 00:37:41.826 [2024-10-12 22:26:00.066556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.826 [2024-10-12 22:26:00.066585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.826 qpair failed and we were unable to recover it. 00:37:41.826 [2024-10-12 22:26:00.066953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.826 [2024-10-12 22:26:00.066986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.826 qpair failed and we were unable to recover it. 
00:37:41.829 [2024-10-12 22:26:00.107849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.107879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.108216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.108246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.108510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.108538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.108930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.108959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.109355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.109384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 
00:37:41.829 [2024-10-12 22:26:00.109764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.109791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.110161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.110191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.110563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.110592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.110950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.110978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.111322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.111352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 
00:37:41.829 [2024-10-12 22:26:00.111695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.111724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.112167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.112198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.112428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.112461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.112813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.112842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.113231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.113261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 
00:37:41.829 [2024-10-12 22:26:00.113630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.113658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.114009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.114037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.114417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.114448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.114810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.114839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.115189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.115219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 
00:37:41.829 [2024-10-12 22:26:00.115592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.115621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.115978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.116006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.116364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.116394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.116633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.116664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.117095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.117138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 
00:37:41.829 [2024-10-12 22:26:00.117418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.117446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.117842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.117871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.118243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.118273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.118634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.118662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.119024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.119053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 
00:37:41.829 [2024-10-12 22:26:00.119410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.119440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.119813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.119842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.829 qpair failed and we were unable to recover it. 00:37:41.829 [2024-10-12 22:26:00.120181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.829 [2024-10-12 22:26:00.120211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.120583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.120618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.120985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.121013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 
00:37:41.830 [2024-10-12 22:26:00.121410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.121440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.121827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.121857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.122185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.122214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.122597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.122625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.122987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.123015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 
00:37:41.830 [2024-10-12 22:26:00.123441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.123472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.123753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.123780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.124024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.124052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.124339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.124369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.124742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.124770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 
00:37:41.830 [2024-10-12 22:26:00.125046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.125073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.125466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.125497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.125862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.125892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.126262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.126293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.126671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.126699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 
00:37:41.830 [2024-10-12 22:26:00.127060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.127089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.127472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.127501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.127875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.127903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.128166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.128195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.128454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.128483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 
00:37:41.830 [2024-10-12 22:26:00.128867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.128897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.129261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.129291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.129653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.129683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.130086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.130140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.130476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.130505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 
00:37:41.830 [2024-10-12 22:26:00.130881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.130909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.131262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.131292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.133373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.133439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.133724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.133759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 00:37:41.830 [2024-10-12 22:26:00.133949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.830 [2024-10-12 22:26:00.133979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.830 qpair failed and we were unable to recover it. 
00:37:41.831 [2024-10-12 22:26:00.134228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.134259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.134554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.134582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.134840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.134869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.135287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.135317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.135601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.135629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 
00:37:41.831 [2024-10-12 22:26:00.135796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.135825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.136145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.136176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.136477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.136506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.136667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.136710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.136970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.136999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 
00:37:41.831 [2024-10-12 22:26:00.137387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.137418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.137649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.137677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.137828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.137855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.138026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.138054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 00:37:41.831 [2024-10-12 22:26:00.138260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.831 [2024-10-12 22:26:00.138291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.831 qpair failed and we were unable to recover it. 
00:37:41.834 [2024-10-12 22:26:00.179680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.179709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.180076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.180114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.180472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.180500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.180762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.180798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.181233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.181265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 
00:37:41.834 [2024-10-12 22:26:00.181619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.181647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.182016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.182046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.182304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.182334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.182702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.182731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.183095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.183136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 
00:37:41.834 [2024-10-12 22:26:00.183460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.183488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.183863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.183895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.184261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.184290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.184640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.184676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.185006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.185035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 
00:37:41.834 [2024-10-12 22:26:00.185308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.185337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.185688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.185716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.185957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.185986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.186336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.186366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.186741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.186769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 
00:37:41.834 [2024-10-12 22:26:00.187137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.187166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.187582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.187610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.187847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.187874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.188266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.188298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.188658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.188688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 
00:37:41.834 [2024-10-12 22:26:00.189054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.189082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.189460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.189490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.189828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.189855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.834 [2024-10-12 22:26:00.190217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.834 [2024-10-12 22:26:00.190248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.834 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.190598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.190627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 
00:37:41.835 [2024-10-12 22:26:00.190882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.190911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.191283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.191312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.191664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.191692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.192055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.192084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.192464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.192492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 
00:37:41.835 [2024-10-12 22:26:00.192801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.192830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.193263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.193294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.193642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.193670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.194036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.194064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.194442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.194472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 
00:37:41.835 [2024-10-12 22:26:00.194896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.194923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.195282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.195313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.195702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.195731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.196097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.196144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.196547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.196575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 
00:37:41.835 [2024-10-12 22:26:00.196936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.196964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.197324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.197354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.197711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.197741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.198065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.198093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.198452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.198481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 
00:37:41.835 [2024-10-12 22:26:00.198772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.198799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.199146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.199175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.199582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.199610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.200038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.200068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.200447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.200477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 
00:37:41.835 [2024-10-12 22:26:00.200839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.200866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.201233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.201262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.201592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.201620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.201978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.202007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.202365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.202396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 
00:37:41.835 [2024-10-12 22:26:00.202737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.202764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.203140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.203171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.203502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.835 [2024-10-12 22:26:00.203531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.835 qpair failed and we were unable to recover it. 00:37:41.835 [2024-10-12 22:26:00.203903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.203931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.204319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.204348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 
00:37:41.836 [2024-10-12 22:26:00.204710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.204738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.205117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.205147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.205476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.205509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.205863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.205893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.206242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.206272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 
00:37:41.836 [2024-10-12 22:26:00.206607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.206635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.207005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.207035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.207418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.207448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.207807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.207834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.208198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.208228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 
00:37:41.836 [2024-10-12 22:26:00.208573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.208600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.208966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.208994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.209246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.209278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.209639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.209668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.210074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.210101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 
00:37:41.836 [2024-10-12 22:26:00.210535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.210563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.210821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.210849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.211223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.211253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.211622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.211658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.212092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.212143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 
00:37:41.836 [2024-10-12 22:26:00.212521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.212548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.212903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.212930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.213288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.213317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.213697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.213725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.213974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.214003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 
00:37:41.836 [2024-10-12 22:26:00.214350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.214379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.214743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.214772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.215164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.215193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.215542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.215569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.215946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.215974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 
00:37:41.836 [2024-10-12 22:26:00.216334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.216364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.216728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.216756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.217095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.217146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.217550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.217580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 00:37:41.836 [2024-10-12 22:26:00.217930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.836 [2024-10-12 22:26:00.217959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.836 qpair failed and we were unable to recover it. 
00:37:41.837 [2024-10-12 22:26:00.218327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.218359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.218730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.218758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.219134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.219164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.219541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.219570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.219933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.219960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 
00:37:41.837 [2024-10-12 22:26:00.220332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.220361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.220709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.220737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.221047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.221075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.221450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.221479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.221843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.221870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 
00:37:41.837 [2024-10-12 22:26:00.222231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.222261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.222632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.222660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.223019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.223049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.223409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.223440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.223797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.223827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 
00:37:41.837 [2024-10-12 22:26:00.224194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.224224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.224612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.224640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.224887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.224915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.225184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.225213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.225590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.225620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 
00:37:41.837 [2024-10-12 22:26:00.226025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.226053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.226337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.226366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.226759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.226787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.227148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.227184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.227539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.227566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 
00:37:41.837 [2024-10-12 22:26:00.227934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.227964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.228335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.228365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.228760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.228788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.229151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.229181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.229572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.229600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 
00:37:41.837 [2024-10-12 22:26:00.229856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.229884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.230148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.230180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.230531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.230560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.230933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.230961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.231346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.231376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 
00:37:41.837 [2024-10-12 22:26:00.231714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.231742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.232118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.232148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.232518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.232551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.837 qpair failed and we were unable to recover it. 00:37:41.837 [2024-10-12 22:26:00.232893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.837 [2024-10-12 22:26:00.232922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.233294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.233324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 
00:37:41.838 [2024-10-12 22:26:00.233686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.233713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.234088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.234128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.234490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.234519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.234886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.234915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.235390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.235420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 
00:37:41.838 [2024-10-12 22:26:00.235783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.235811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.236190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.236218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.236614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.236646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.237029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.237057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.237435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.237464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 
00:37:41.838 [2024-10-12 22:26:00.237722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.237751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.238138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.238169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.238507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.238536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.238913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.238941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.239202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.239234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 
00:37:41.838 [2024-10-12 22:26:00.239604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.239633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.239997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.240026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.240369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.240399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.240764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.240794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.241163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.241193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 
00:37:41.838 [2024-10-12 22:26:00.241555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.241585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.241977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.242007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.242260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.242289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.242581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.242615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.242997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.243025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 
00:37:41.838 [2024-10-12 22:26:00.243381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.243412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.243775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.243803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.244151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.244181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.244572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.244600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.245043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.245071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 
00:37:41.838 [2024-10-12 22:26:00.245408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.245437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.245792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.245822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.246196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.246227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.246586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.246613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 00:37:41.838 [2024-10-12 22:26:00.246984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.838 [2024-10-12 22:26:00.247012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.838 qpair failed and we were unable to recover it. 
00:37:41.839 [... the identical "posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." retry sequence repeats through 2024-10-12 22:26:00.289549 ...]
00:37:41.841 [2024-10-12 22:26:00.289793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.841 [2024-10-12 22:26:00.289822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.841 qpair failed and we were unable to recover it. 00:37:41.841 [2024-10-12 22:26:00.290207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.841 [2024-10-12 22:26:00.290238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.841 qpair failed and we were unable to recover it. 00:37:41.841 [2024-10-12 22:26:00.290601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.841 [2024-10-12 22:26:00.290629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.841 qpair failed and we were unable to recover it. 00:37:41.841 [2024-10-12 22:26:00.290897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.841 [2024-10-12 22:26:00.290926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.841 qpair failed and we were unable to recover it. 00:37:41.841 [2024-10-12 22:26:00.291080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.841 [2024-10-12 22:26:00.291118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.841 qpair failed and we were unable to recover it. 
00:37:41.841 [2024-10-12 22:26:00.291499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.841 [2024-10-12 22:26:00.291527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.841 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.291910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.291938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.292317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.292348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.292724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.292754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.293081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.293120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 
00:37:41.842 [2024-10-12 22:26:00.293465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.293495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.293864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.293893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.294259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.294291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.294650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.294680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.295091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.295131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 
00:37:41.842 [2024-10-12 22:26:00.295476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.295504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.295850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.295880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.296222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.296252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.296644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.296674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:41.842 [2024-10-12 22:26:00.297022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.297052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 
00:37:41.842 [2024-10-12 22:26:00.297329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.842 [2024-10-12 22:26:00.297359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:41.842 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.297700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.297735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.298121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.298150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.298391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.298419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.298816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.298845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 
00:37:42.114 [2024-10-12 22:26:00.299204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.299235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.299393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.299425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.299764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.299792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.300164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.300194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.300560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.300591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 
00:37:42.114 [2024-10-12 22:26:00.300957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.300986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.301348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.301378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.301632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.301659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.301920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.301949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.302284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.302313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 
00:37:42.114 [2024-10-12 22:26:00.302668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.302698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.303066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.303094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.303444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.303473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.303838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.303866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.304235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.304265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 
00:37:42.114 [2024-10-12 22:26:00.304642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.304670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.305048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.305075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.305468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.114 [2024-10-12 22:26:00.305497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.114 qpair failed and we were unable to recover it. 00:37:42.114 [2024-10-12 22:26:00.305811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.305843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.306198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.306228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 
00:37:42.115 [2024-10-12 22:26:00.306485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.306512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.306898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.306926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.307276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.307306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.307647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.307675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.308031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.308060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 
00:37:42.115 [2024-10-12 22:26:00.308434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.308463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.308722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.308750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.309121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.309152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.309402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.309431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.309798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.309826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 
00:37:42.115 [2024-10-12 22:26:00.310195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.310224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.310480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.310507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.310868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.310897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.311266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.311295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.311658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.311687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 
00:37:42.115 [2024-10-12 22:26:00.312123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.312152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.312488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.312519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.312773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.312801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.313171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.313201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.313549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.313577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 
00:37:42.115 [2024-10-12 22:26:00.313821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.313858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.314210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.314241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.314614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.314643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.315007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.315035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.315461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.315490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 
00:37:42.115 [2024-10-12 22:26:00.315899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.315926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.316289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.316318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.316699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.316727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.317085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.317126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 00:37:42.115 [2024-10-12 22:26:00.317353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-10-12 22:26:00.317380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.115 qpair failed and we were unable to recover it. 
00:37:42.115 [2024-10-12 22:26:00.317671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.115 [2024-10-12 22:26:00.317708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.115 qpair failed and we were unable to recover it.
00:37:42.118 [... the same three-message sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with successive timestamps through 2024-10-12 22:26:00.361189 ...]
00:37:42.118 [2024-10-12 22:26:00.361551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.118 [2024-10-12 22:26:00.361578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.118 qpair failed and we were unable to recover it. 00:37:42.118 [2024-10-12 22:26:00.361936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.118 [2024-10-12 22:26:00.361963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.118 qpair failed and we were unable to recover it. 00:37:42.118 [2024-10-12 22:26:00.362332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.118 [2024-10-12 22:26:00.362362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.118 qpair failed and we were unable to recover it. 00:37:42.118 [2024-10-12 22:26:00.362603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.118 [2024-10-12 22:26:00.362631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.118 qpair failed and we were unable to recover it. 00:37:42.118 [2024-10-12 22:26:00.362975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.363004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 
00:37:42.119 [2024-10-12 22:26:00.363406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.363436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.363874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.363902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.364238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.364268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.364640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.364668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.364886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.364914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 
00:37:42.119 [2024-10-12 22:26:00.365166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.365194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.365539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.365567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.365880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.365908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.366181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.366210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.366588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.366616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 
00:37:42.119 [2024-10-12 22:26:00.366873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.366905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.367267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.367296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.367572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.367599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.367961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.367990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.368340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.368371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 
00:37:42.119 [2024-10-12 22:26:00.368698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.368726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.369021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.369048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.369313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.369346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.369701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.369730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.370113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.370142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 
00:37:42.119 [2024-10-12 22:26:00.370512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.370541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.370913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.370942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.371305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.371335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.371722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.371750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.372135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.372165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 
00:37:42.119 [2024-10-12 22:26:00.372516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.372545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.372918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.372947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.373328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.373356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.373732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.373760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.374028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.374055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 
00:37:42.119 [2024-10-12 22:26:00.374440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.374476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.374838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.374867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.375228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.375257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.375632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.375660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.375926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.375954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 
00:37:42.119 [2024-10-12 22:26:00.376238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.376269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.376645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.376673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.119 [2024-10-12 22:26:00.376936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.119 [2024-10-12 22:26:00.376963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.119 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.377324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.377354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.377730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.377758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 
00:37:42.120 [2024-10-12 22:26:00.378006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.378034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.378443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.378472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.378824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.378852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.379229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.379258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.379618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.379646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 
00:37:42.120 [2024-10-12 22:26:00.380009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.380039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.380282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.380313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.380648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.380677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.380915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.380947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.381211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.381241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 
00:37:42.120 [2024-10-12 22:26:00.381633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.381661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.382029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.382056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.382330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.382360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.382712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.382740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.383160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.383190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 
00:37:42.120 [2024-10-12 22:26:00.383555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.383585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.383951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.383978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.384358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.384388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.384759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.384787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.385168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.385198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 
00:37:42.120 [2024-10-12 22:26:00.385567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.385595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.385873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.385904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.386301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.386331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.386696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.386725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.387068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.387096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 
00:37:42.120 [2024-10-12 22:26:00.387482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.387511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.387866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.387894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.388264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.388294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.388705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.388734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 00:37:42.120 [2024-10-12 22:26:00.389087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.389128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 
00:37:42.120 [2024-10-12 22:26:00.389544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.120 [2024-10-12 22:26:00.389578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.120 qpair failed and we were unable to recover it. 
[... the same three-line error group repeated ~115 times between 22:26:00.389 and 22:26:00.433, always with connect() failed, errno = 111 and tqpair=0x7fb8fc000b90, addr=10.0.0.2, port=4420 ...]
00:37:42.124 [2024-10-12 22:26:00.433496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.433525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 
00:37:42.124 [2024-10-12 22:26:00.433874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.433902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.434197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.434226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.434595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.434623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.434960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.434989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.435372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.435401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 
00:37:42.124 [2024-10-12 22:26:00.435773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.435812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.436168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.436197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.436556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.436585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.436961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.436989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.437362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.437391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 
00:37:42.124 [2024-10-12 22:26:00.437747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.437775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.438142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.438172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.438533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.438561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.438812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.438841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.439179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.439209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 
00:37:42.124 [2024-10-12 22:26:00.439560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.439589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.439958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.439985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.440347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.440377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.440739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.440768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.441127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.441156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 
00:37:42.124 [2024-10-12 22:26:00.441514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.441541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.441899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.441929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.442290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.442319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.442669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.442697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.443055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.443083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 
00:37:42.124 [2024-10-12 22:26:00.443357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.443385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.443749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.443777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.444136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.444167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.444526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.444553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.444910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.444938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 
00:37:42.124 [2024-10-12 22:26:00.445283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.445312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.445672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.445700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.445937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.445969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.446307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.446338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.446721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.446750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 
00:37:42.124 [2024-10-12 22:26:00.447123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.124 [2024-10-12 22:26:00.447153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.124 qpair failed and we were unable to recover it. 00:37:42.124 [2024-10-12 22:26:00.447523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.447551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.447763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.447791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.448178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.448207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.448551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.448580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 
00:37:42.125 [2024-10-12 22:26:00.448939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.448967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.449368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.449398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.449750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.449778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.450123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.450152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.450501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.450530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 
00:37:42.125 [2024-10-12 22:26:00.450903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.450937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.451282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.451313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.451563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.451594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.451951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.451978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.452346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.452375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 
00:37:42.125 [2024-10-12 22:26:00.452728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.452756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.453122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.453151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.453497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.453526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.453868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.453896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.454260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.454290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 
00:37:42.125 [2024-10-12 22:26:00.454539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.454567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.454908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.454936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.455325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.455356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.455717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.455745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.456140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.456169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 
00:37:42.125 [2024-10-12 22:26:00.456507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.456534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.456779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.456807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.457158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.457187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.457413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.457443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.457685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.457713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 
00:37:42.125 [2024-10-12 22:26:00.458073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.458101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.458356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.458385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.458753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.458782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.459205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.459234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.459591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.459619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 
00:37:42.125 [2024-10-12 22:26:00.459986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.460014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.460374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.460404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.460787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.460816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.461064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.461093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.125 qpair failed and we were unable to recover it. 00:37:42.125 [2024-10-12 22:26:00.461479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.125 [2024-10-12 22:26:00.461508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 
00:37:42.126 [2024-10-12 22:26:00.461851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.126 [2024-10-12 22:26:00.461881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 00:37:42.126 [2024-10-12 22:26:00.462253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.126 [2024-10-12 22:26:00.462283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 00:37:42.126 [2024-10-12 22:26:00.462637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.126 [2024-10-12 22:26:00.462665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 00:37:42.126 [2024-10-12 22:26:00.463034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.126 [2024-10-12 22:26:00.463062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 00:37:42.126 [2024-10-12 22:26:00.463466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.126 [2024-10-12 22:26:00.463495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 
00:37:42.126 [2024-10-12 22:26:00.463859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.126 [2024-10-12 22:26:00.463886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 00:37:42.126 [2024-10-12 22:26:00.464235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.126 [2024-10-12 22:26:00.464265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 00:37:42.126 [2024-10-12 22:26:00.464616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.126 [2024-10-12 22:26:00.464645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 00:37:42.126 [2024-10-12 22:26:00.465020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.126 [2024-10-12 22:26:00.465048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 00:37:42.126 [2024-10-12 22:26:00.465286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.126 [2024-10-12 22:26:00.465318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.126 qpair failed and we were unable to recover it. 
00:37:42.126 [2024-10-12 22:26:00.465680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.465714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.466069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.466097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.466473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.466503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.466856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.466883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.467230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.467259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.467641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.467669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.468015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.468044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.468431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.468461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.468816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.468844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.469219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.469247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.469633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.469662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.470014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.470042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.470399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.470429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.470794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.470822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.471260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.471289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.471661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.471688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.471942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.471971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.472334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.472364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.472724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.472752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.473121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.473150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.473498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.473527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.473875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.473904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.474268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.474297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.474672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.474700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.475065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.475092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.475478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.475506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.475866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.475893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.126 [2024-10-12 22:26:00.476249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.126 [2024-10-12 22:26:00.476280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.126 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.476721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.476749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.477079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.477119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.477484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.477512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.477871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.477899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.478260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.478289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.478652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.478680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.479054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.479082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.479265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.479293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.479674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.479704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.480068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.480096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.480435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.480464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.480801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.480830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.481180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.481216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.481611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.481639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.482005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.482032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.482401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.482430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.482791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.482819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.483182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.483210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.483568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.483595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.483844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.483871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.484286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.484316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.484677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.484705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.485089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.485129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.485483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.485511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.485877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.485906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.486259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.486289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.486646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.486674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.487023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.487051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.487491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.487521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.487884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.487911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.488282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.488310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.488551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.488582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.488976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.489004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.489349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.127 [2024-10-12 22:26:00.489378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.127 qpair failed and we were unable to recover it.
00:37:42.127 [2024-10-12 22:26:00.489738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.489766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.490132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.490163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.490533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.490560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.490923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.490950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.491328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.491356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.491707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.491736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.492093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.492131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.492499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.492528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.492857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.492885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.493247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.493277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.493643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.493671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.494032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.494061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.494425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.494454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.494816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.494844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.495189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.495219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.495598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.495626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.495967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.495995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.496359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.496387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.496743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.496772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.497137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.497167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.497525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.497553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.497920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.497948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.498306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.498336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.498694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.498721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.499095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.499136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.499467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.499495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.499864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.499891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.500242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.500271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.500627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.500655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.501014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.501043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.501408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.501438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.501781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.501808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.502065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.128 [2024-10-12 22:26:00.502093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.128 qpair failed and we were unable to recover it.
00:37:42.128 [2024-10-12 22:26:00.502464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.128 [2024-10-12 22:26:00.502493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.128 qpair failed and we were unable to recover it. 00:37:42.128 [2024-10-12 22:26:00.502862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.128 [2024-10-12 22:26:00.502890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.128 qpair failed and we were unable to recover it. 00:37:42.128 [2024-10-12 22:26:00.503259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.128 [2024-10-12 22:26:00.503289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.503505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.503533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.503912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.503941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 
00:37:42.129 [2024-10-12 22:26:00.504283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.504312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.504674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.504701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.505067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.505095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.505474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.505503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.505831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.505862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 
00:37:42.129 [2024-10-12 22:26:00.506116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.506146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.506494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.506523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.506888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.506925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.507264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.507293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.507674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.507702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 
00:37:42.129 [2024-10-12 22:26:00.508064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.508091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.508464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.508493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.508849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.508878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.509244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.509273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.509637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.509667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 
00:37:42.129 [2024-10-12 22:26:00.510024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.510052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.510396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.510425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.510792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.510820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.511190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.511219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.511660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.511689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 
00:37:42.129 [2024-10-12 22:26:00.512017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.512045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.512416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.512446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.512812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.512840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.513195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.513225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.513587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.513615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 
00:37:42.129 [2024-10-12 22:26:00.513972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.513999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.514365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.514394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.514742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.514771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.515120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.515151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.515533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.515561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 
00:37:42.129 [2024-10-12 22:26:00.515878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.515906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.516282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.516311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.516684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.516713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.517078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.517116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 00:37:42.129 [2024-10-12 22:26:00.517467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.517495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.129 qpair failed and we were unable to recover it. 
00:37:42.129 [2024-10-12 22:26:00.517854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.129 [2024-10-12 22:26:00.517882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.518147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.518176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.518548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.518576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.518948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.518976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.519352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.519382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 
00:37:42.130 [2024-10-12 22:26:00.519721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.519749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.520127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.520158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.520521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.520549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.520917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.520945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.521329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.521358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 
00:37:42.130 [2024-10-12 22:26:00.521732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.521760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.522125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.522155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.522517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.522551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.522802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.522834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.523117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.523147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 
00:37:42.130 [2024-10-12 22:26:00.523536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.523564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.523917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.523945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.524193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.524222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.524604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.524633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.524991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.525018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 
00:37:42.130 [2024-10-12 22:26:00.525401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.525430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.525803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.525831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.526117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.526145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.526510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.526537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.526901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.526929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 
00:37:42.130 [2024-10-12 22:26:00.527278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.527307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.527670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.527698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.528058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.528086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.528462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.528490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.528845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.528873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 
00:37:42.130 [2024-10-12 22:26:00.529263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.529293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.529521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.529548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.529802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.529832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.530205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.530234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.530600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.530628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 
00:37:42.130 [2024-10-12 22:26:00.530993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.531020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.531370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.130 [2024-10-12 22:26:00.531400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.130 qpair failed and we were unable to recover it. 00:37:42.130 [2024-10-12 22:26:00.531662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.531690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 00:37:42.131 [2024-10-12 22:26:00.532054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.532081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 00:37:42.131 [2024-10-12 22:26:00.532450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.532479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 
00:37:42.131 [2024-10-12 22:26:00.532829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.532857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 00:37:42.131 [2024-10-12 22:26:00.533223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.533252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 00:37:42.131 [2024-10-12 22:26:00.533642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.533670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 00:37:42.131 [2024-10-12 22:26:00.534099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.534138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 00:37:42.131 [2024-10-12 22:26:00.534484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.534512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 
00:37:42.131 [2024-10-12 22:26:00.534896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.534924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 00:37:42.131 [2024-10-12 22:26:00.535288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.535319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 00:37:42.131 [2024-10-12 22:26:00.535661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.535689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 00:37:42.131 [2024-10-12 22:26:00.535838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.535868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 00:37:42.131 [2024-10-12 22:26:00.536138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.131 [2024-10-12 22:26:00.536169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.131 qpair failed and we were unable to recover it. 
00:37:42.134 [2024-10-12 22:26:00.575785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.575813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.576180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.576210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.576457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.576485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.576872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.576901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.577273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.577303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 
00:37:42.134 [2024-10-12 22:26:00.577645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.577673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.578036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.578064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.578426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.578454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.578812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.578840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.579205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.579235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 
00:37:42.134 [2024-10-12 22:26:00.579612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.579640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.580010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.580040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.580404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.580434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.580805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.580834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.581185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.581215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 
00:37:42.134 [2024-10-12 22:26:00.581603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.581633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.582007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.582035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.582399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.582435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.582786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.582813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.583228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.583258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 
00:37:42.134 [2024-10-12 22:26:00.583628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.583656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.583904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.583932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.584272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.584309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.584640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.584669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 00:37:42.134 [2024-10-12 22:26:00.585058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.134 [2024-10-12 22:26:00.585086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.134 qpair failed and we were unable to recover it. 
00:37:42.135 [2024-10-12 22:26:00.585468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.585497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.585753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.585784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.586161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.586193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.586567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.586596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.586850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.586878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 
00:37:42.135 [2024-10-12 22:26:00.587237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.587266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.587607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.587637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.588014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.588042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.588416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.588446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.588813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.588843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 
00:37:42.135 [2024-10-12 22:26:00.589212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.589242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.589359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.589389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.589755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.589784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.135 [2024-10-12 22:26:00.590135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.135 [2024-10-12 22:26:00.590165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.135 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.590519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.590550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 
00:37:42.407 [2024-10-12 22:26:00.590909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.590937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.591307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.591337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.591712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.591740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.592120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.592149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.592560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.592590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 
00:37:42.407 [2024-10-12 22:26:00.592956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.592986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.593249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.593278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.593634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.593663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.593909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.593940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.594372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.594402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 
00:37:42.407 [2024-10-12 22:26:00.594760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.594789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.595146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.595175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.595558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.595586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.595967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.595996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.596287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.596316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 
00:37:42.407 [2024-10-12 22:26:00.596681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.596709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.596974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.597001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.597356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.597392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.597732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.597761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.598127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.598158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 
00:37:42.407 [2024-10-12 22:26:00.598589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.598617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.598975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.599003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.599305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.599334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.599698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.599727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.600093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.600133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 
00:37:42.407 [2024-10-12 22:26:00.600481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.407 [2024-10-12 22:26:00.600510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.407 qpair failed and we were unable to recover it. 00:37:42.407 [2024-10-12 22:26:00.600873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.600902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.601268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.601297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.601641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.601669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.601937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.601965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 
00:37:42.408 [2024-10-12 22:26:00.602327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.602357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.602737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.602765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.603176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.603204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.603544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.603572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.603851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.603879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 
00:37:42.408 [2024-10-12 22:26:00.604225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.604255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.604609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.604638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.604882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.604910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.605313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.605343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.605597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.605625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 
00:37:42.408 [2024-10-12 22:26:00.605838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.605884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.606145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.606176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.606423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.606450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.606836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.606864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.607252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.607282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 
00:37:42.408 [2024-10-12 22:26:00.607657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.607685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.608022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.608051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.608293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.608326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.608716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.608746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 00:37:42.408 [2024-10-12 22:26:00.609098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.408 [2024-10-12 22:26:00.609138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.408 qpair failed and we were unable to recover it. 
00:37:42.408 [2024-10-12 22:26:00.609321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.408 [2024-10-12 22:26:00.609350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.408 qpair failed and we were unable to recover it.
00:37:42.408 [2024-10-12 22:26:00.609724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.408 [2024-10-12 22:26:00.609751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.408 qpair failed and we were unable to recover it.
00:37:42.408 [2024-10-12 22:26:00.610122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.408 [2024-10-12 22:26:00.610153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.408 qpair failed and we were unable to recover it.
00:37:42.408 [2024-10-12 22:26:00.610445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.408 [2024-10-12 22:26:00.610473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.408 qpair failed and we were unable to recover it.
00:37:42.408 [2024-10-12 22:26:00.610708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.408 [2024-10-12 22:26:00.610738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.408 qpair failed and we were unable to recover it.
00:37:42.408 [2024-10-12 22:26:00.611090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.408 [2024-10-12 22:26:00.611131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.408 qpair failed and we were unable to recover it.
00:37:42.408 [2024-10-12 22:26:00.611488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.408 [2024-10-12 22:26:00.611516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.408 qpair failed and we were unable to recover it.
00:37:42.408 [2024-10-12 22:26:00.611888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.408 [2024-10-12 22:26:00.611922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.408 qpair failed and we were unable to recover it.
00:37:42.408 [2024-10-12 22:26:00.612288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.408 [2024-10-12 22:26:00.612318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.408 qpair failed and we were unable to recover it.
00:37:42.408 [2024-10-12 22:26:00.612707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.612735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.613126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.613155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.613515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.613544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.613965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.613994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.614363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.614393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.614753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.614780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.615153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.615184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.615544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.615572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.615932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.615960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.616225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.616254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.616509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.616538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.616754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.616781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.617218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.617248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.617595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.617623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.617867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.617895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.618230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.618259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.618612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.618642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.619005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.619033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.619388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.619417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.619789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.619818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.620170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.620199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.620583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.620611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.621051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.621079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.621441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.621471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.621836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.621864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.622210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.622241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.622605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.622633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.622881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.622908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.623063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.623093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.409 [2024-10-12 22:26:00.623493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.409 [2024-10-12 22:26:00.623522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.409 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.623894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.623923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.624285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.624315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.624668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.624697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.625072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.625099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.625470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.625499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.625863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.625891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.626248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.626277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.626531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.626559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.626901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.626935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.627284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.627313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.627675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.627703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.627918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.627945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.628196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.628226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.628556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.628583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.628956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.628983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.629399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.629429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.629717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.629745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.630189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.630219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.630580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.630609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.630991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.631020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.631377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.631407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.631780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.631808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.632163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.632194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.632587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.632614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.632985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.633013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.633265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.633293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.633451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.633477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.633813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.633841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.410 [2024-10-12 22:26:00.634216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.410 [2024-10-12 22:26:00.634246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.410 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.634622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.634650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.634964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.634991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.635350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.635379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.635724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.635753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.635995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.636023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.636360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.636389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.636759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.636788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.637153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.637183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.637557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.637585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.637924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.637953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.638297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.638326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.638694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.638722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.638969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.638999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.639174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.639203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.639573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.639600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.639967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.639994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.640439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.640468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.640834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.640861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.641205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.641234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.641606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.641640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.641996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.642023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.642373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.642403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.642775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.642804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.643175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.643206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.643441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.643471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.643844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.643872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.644226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.411 [2024-10-12 22:26:00.644257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.411 qpair failed and we were unable to recover it.
00:37:42.411 [2024-10-12 22:26:00.644730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.411 [2024-10-12 22:26:00.644758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.411 qpair failed and we were unable to recover it. 00:37:42.411 [2024-10-12 22:26:00.645087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.411 [2024-10-12 22:26:00.645157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.645531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.645559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.645793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.645820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.646185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.646214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 
00:37:42.412 [2024-10-12 22:26:00.646591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.646619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.646949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.646977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.647326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.647356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.647718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.647745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.648116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.648146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 
00:37:42.412 [2024-10-12 22:26:00.648520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.648548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.648892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.648919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.649276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.649306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.649662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.649690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.650056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.650086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 
00:37:42.412 [2024-10-12 22:26:00.650451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.650479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.650841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.650868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.651224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.651253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.651684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.651711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.652046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.652076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 
00:37:42.412 [2024-10-12 22:26:00.652420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.652448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.652810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.652837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.653192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.653222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.412 qpair failed and we were unable to recover it. 00:37:42.412 [2024-10-12 22:26:00.653576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.412 [2024-10-12 22:26:00.653604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.653967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.653996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 
00:37:42.413 [2024-10-12 22:26:00.654341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.654370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.654572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.654601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.654950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.654979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.655337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.655367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.655739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.655767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 
00:37:42.413 [2024-10-12 22:26:00.656131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.656161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.656528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.656556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.656890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.656925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.657283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.657313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.657681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.657709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 
00:37:42.413 [2024-10-12 22:26:00.657983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.658010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.658381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.658409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.658763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.658791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.659243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.659272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.659631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.659659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 
00:37:42.413 [2024-10-12 22:26:00.660064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.660092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.660441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.660469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.660811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.660839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.661221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.661251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.661630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.661657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 
00:37:42.413 [2024-10-12 22:26:00.662041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.662068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.662436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.662466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.662826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.662853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.663221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.663250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.663490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.663521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 
00:37:42.413 [2024-10-12 22:26:00.663884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.663912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.664258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.413 [2024-10-12 22:26:00.664288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.413 qpair failed and we were unable to recover it. 00:37:42.413 [2024-10-12 22:26:00.664677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.664705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.665061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.665089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.665471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.665499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 
00:37:42.414 [2024-10-12 22:26:00.665862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.665891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.666269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.666300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.666548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.666575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.666822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.666853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.667225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.667255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 
00:37:42.414 [2024-10-12 22:26:00.667612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.667640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.668022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.668050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.668346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.668375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.668738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.668765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.669128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.669156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 
00:37:42.414 [2024-10-12 22:26:00.669519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.669548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.669806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.669833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.670196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.670226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.670645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.670673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.671064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.671091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 
00:37:42.414 [2024-10-12 22:26:00.671367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.671395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.671743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.671770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.672133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.672168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.672563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.672591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.672969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.672996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 
00:37:42.414 [2024-10-12 22:26:00.673340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.673376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.673736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.673764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.674126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.674156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.674539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.674566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.414 [2024-10-12 22:26:00.674937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.674965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 
00:37:42.414 [2024-10-12 22:26:00.675321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.414 [2024-10-12 22:26:00.675351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.414 qpair failed and we were unable to recover it. 00:37:42.415 [2024-10-12 22:26:00.675707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.415 [2024-10-12 22:26:00.675735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.415 qpair failed and we were unable to recover it. 00:37:42.415 [2024-10-12 22:26:00.676100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.415 [2024-10-12 22:26:00.676139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.415 qpair failed and we were unable to recover it. 00:37:42.415 [2024-10-12 22:26:00.676404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.415 [2024-10-12 22:26:00.676432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.415 qpair failed and we were unable to recover it. 00:37:42.415 [2024-10-12 22:26:00.676860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.415 [2024-10-12 22:26:00.676888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.415 qpair failed and we were unable to recover it. 
00:37:42.415 [2024-10-12 22:26:00.677136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.415 [2024-10-12 22:26:00.677167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.415 qpair failed and we were unable to recover it.
...
00:37:42.419 [2024-10-12 22:26:00.721123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.419 [2024-10-12 22:26:00.721153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.419 qpair failed and we were unable to recover it.
00:37:42.419 [2024-10-12 22:26:00.721511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.721539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.721903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.721931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.722282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.722312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.722667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.722696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.723063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.723091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 
00:37:42.419 [2024-10-12 22:26:00.723341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.723369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.723730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.723758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.724126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.724156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.724527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.724555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.724957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.724985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 
00:37:42.419 [2024-10-12 22:26:00.725354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.725383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.725746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.725774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.726141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.726171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.726530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.726557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.419 [2024-10-12 22:26:00.726918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.726946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 
00:37:42.419 [2024-10-12 22:26:00.727307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.419 [2024-10-12 22:26:00.727337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.419 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.727702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.727730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.728086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.728123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.728478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.728505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.728868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.728902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 
00:37:42.420 [2024-10-12 22:26:00.729240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.729270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.729638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.729666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.730021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.730049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.730443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.730473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.730834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.730861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 
00:37:42.420 [2024-10-12 22:26:00.731216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.731246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.731610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.731638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.732011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.732039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.732405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.732433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.732775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.732803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 
00:37:42.420 [2024-10-12 22:26:00.733169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.733198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.733551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.733579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.733943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.733971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.734342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.734370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.734715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.734743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 
00:37:42.420 [2024-10-12 22:26:00.735115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.735144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.735496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.735523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.735884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.735911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.736165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.736193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.736438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.736466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 
00:37:42.420 [2024-10-12 22:26:00.736826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.736855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.737178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.737236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.737620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.737647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.738009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.738037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.738413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.738442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 
00:37:42.420 [2024-10-12 22:26:00.738803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.738831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.739190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.739219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.739614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.739642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.740017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.740044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.740410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.740440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 
00:37:42.420 [2024-10-12 22:26:00.740857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.740885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.420 qpair failed and we were unable to recover it. 00:37:42.420 [2024-10-12 22:26:00.741282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.420 [2024-10-12 22:26:00.741311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.741585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.741613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.741956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.741984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.742325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.742355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 
00:37:42.421 [2024-10-12 22:26:00.742720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.742748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.743120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.743149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.743499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.743526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.743899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.743926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.744271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.744308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 
00:37:42.421 [2024-10-12 22:26:00.744645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.744673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.745034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.745062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.745509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.745540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.745970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.745998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.746362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.746390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 
00:37:42.421 [2024-10-12 22:26:00.746643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.746673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.747049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.747077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.747508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.747537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.747910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.747937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.748187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.748219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 
00:37:42.421 [2024-10-12 22:26:00.748569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.748597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.748823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.748851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.749125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.749154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.749556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.749585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.749960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.749988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 
00:37:42.421 [2024-10-12 22:26:00.750328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.750358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.750712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.750742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.751082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.751119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.751452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.751481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.751835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.751863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 
00:37:42.421 [2024-10-12 22:26:00.752221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.752251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.752627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.752655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.753016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.753044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.753310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.753339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.753718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.753746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 
00:37:42.421 [2024-10-12 22:26:00.754117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.754146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.754502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.421 [2024-10-12 22:26:00.754530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.421 qpair failed and we were unable to recover it. 00:37:42.421 [2024-10-12 22:26:00.754871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.754898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.755271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.755300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.755646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.755674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 
00:37:42.422 [2024-10-12 22:26:00.756043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.756072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.756411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.756440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.756802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.756830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.757179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.757209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.757575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.757604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 
00:37:42.422 [2024-10-12 22:26:00.757757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.757788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.758157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.758187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.758544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.758573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.758933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.758961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.759323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.759360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 
00:37:42.422 [2024-10-12 22:26:00.759716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.759743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.760113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.760144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.760518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.760548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.760879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.760906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.761260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.761290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 
00:37:42.422 [2024-10-12 22:26:00.761655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.761682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.762037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.762066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.762459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.762488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.762849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.762876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.763126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.763158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 
00:37:42.422 [2024-10-12 22:26:00.763493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.763522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.763932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.763961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.764334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.764364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.764729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.764758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.765184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.765214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 
00:37:42.422 [2024-10-12 22:26:00.765594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.765622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.766005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.766033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.766396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.766425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.766788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.766815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.767178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.767207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 
00:37:42.422 [2024-10-12 22:26:00.767584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.767612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.767968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.767996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.768368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.768397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.768655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.768682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 00:37:42.422 [2024-10-12 22:26:00.769031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.422 [2024-10-12 22:26:00.769058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.422 qpair failed and we were unable to recover it. 
00:37:42.423 [2024-10-12 22:26:00.769396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.769425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.769756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.769784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.770148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.770183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.770548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.770576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.770959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.770987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 
00:37:42.423 [2024-10-12 22:26:00.771248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.771278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.771630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.771658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.772031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.772060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.772448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.772480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.772879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.772909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 
00:37:42.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3777980 Killed "${NVMF_APP[@]}" "$@" 00:37:42.423 [2024-10-12 22:26:00.773290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.773320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.773684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.773712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:42.423 [2024-10-12 22:26:00.774069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.774097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:42.423 [2024-10-12 22:26:00.774472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.774502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 
00:37:42.423 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:42.423 [2024-10-12 22:26:00.774829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.774858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:42.423 [2024-10-12 22:26:00.775227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.423 [2024-10-12 22:26:00.775258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.775617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.775645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.776009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.776037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 
00:37:42.423 [2024-10-12 22:26:00.776404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.776434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.776799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.776827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.777200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.777230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.777603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.777631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.777991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.778020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 
00:37:42.423 [2024-10-12 22:26:00.778394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.778423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.778781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.778810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.779053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.779085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.779531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.779561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.779927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.779956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 
00:37:42.423 [2024-10-12 22:26:00.780347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.780377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.780736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.780764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.781121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.781150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.781524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.781552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.423 qpair failed and we were unable to recover it. 00:37:42.423 [2024-10-12 22:26:00.781967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.423 [2024-10-12 22:26:00.781995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 
00:37:42.424 [2024-10-12 22:26:00.782369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.782399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.782755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.782784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.783148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.783178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.783550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.783577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 
00:37:42.424 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # nvmfpid=3779004 00:37:42.424 [2024-10-12 22:26:00.783952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # waitforlisten 3779004 00:37:42.424 [2024-10-12 22:26:00.783983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:42.424 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3779004 ']' 00:37:42.424 [2024-10-12 22:26:00.784376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.784407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:42.424 [2024-10-12 22:26:00.784753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.784790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 
00:37:42.424 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:42.424 [2024-10-12 22:26:00.785135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:42.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:42.424 [2024-10-12 22:26:00.785165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:42.424 [2024-10-12 22:26:00.785541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.785572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 22:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:42.424 [2024-10-12 22:26:00.785966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.786002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 
00:37:42.424 [2024-10-12 22:26:00.786320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.786350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.786717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.786748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.787081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.787121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.787521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.787550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.787911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.787940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 
00:37:42.424 [2024-10-12 22:26:00.788094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.788141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.788522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.788553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.788747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.788777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.789150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.789180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.789550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.789579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 
00:37:42.424 [2024-10-12 22:26:00.789830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.789860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.790141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.790170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.790537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.790569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.790943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.790972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.791369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.791398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 
00:37:42.424 [2024-10-12 22:26:00.791737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.791768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.792149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.792180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.792546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.792582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.424 [2024-10-12 22:26:00.792986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.424 [2024-10-12 22:26:00.793014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.424 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.793386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.793417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 
00:37:42.425 [2024-10-12 22:26:00.793805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.793834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.794098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.794137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.794513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.794541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.794809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.794837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.795184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.795215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 
00:37:42.425 [2024-10-12 22:26:00.795630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.795660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.796030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.796060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.796423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.796452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.796711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.796739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.797094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.797136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 
00:37:42.425 [2024-10-12 22:26:00.797427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.797456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.797818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.797851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.798124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.798154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.798511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.798547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.798917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.798949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 
00:37:42.425 [2024-10-12 22:26:00.799247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.799278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.799660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.799689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.799930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.799957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.800330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.800359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.800692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.800721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 
00:37:42.425 [2024-10-12 22:26:00.801116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.801145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.801523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.801551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.801916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.801944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.802385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.802417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.802778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.802807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 
00:37:42.425 [2024-10-12 22:26:00.803065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.803094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.803517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.803546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.803902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.803931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.804318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.804350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.804597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.804625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 
00:37:42.425 [2024-10-12 22:26:00.804903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.804931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.805253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.805283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.805685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.805713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.806084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.425 [2024-10-12 22:26:00.806126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.425 qpair failed and we were unable to recover it. 00:37:42.425 [2024-10-12 22:26:00.806585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.806613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 
00:37:42.426 [2024-10-12 22:26:00.806980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.807010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.807256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.807286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.807565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.807600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.807971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.807999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.808253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.808284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 
00:37:42.426 [2024-10-12 22:26:00.808645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.808673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.809030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.809058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.809370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.809399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.809759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.809787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.810029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.810060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 
00:37:42.426 [2024-10-12 22:26:00.810447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.810478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.810868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.810896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.811158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.811188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.811593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.811623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.811863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.811891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 
00:37:42.426 [2024-10-12 22:26:00.812258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.812289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.812716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.812746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.813084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.813123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.813539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.813567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.813951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.813981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 
00:37:42.426 [2024-10-12 22:26:00.814380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.814410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.814769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.814797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.815035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.815063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.815357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.815389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.815758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.815787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 
00:37:42.426 [2024-10-12 22:26:00.816023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.816052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.816420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.816451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.816797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.816827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.817229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.817260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.817685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.817716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 
00:37:42.426 [2024-10-12 22:26:00.818089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.818129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.818404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.818433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.818669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.818702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.819125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.819156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.819426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.819454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 
00:37:42.426 [2024-10-12 22:26:00.819819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.819847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.820225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.820254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.426 [2024-10-12 22:26:00.820649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.426 [2024-10-12 22:26:00.820677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.426 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.820891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.820918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.821319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.821349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 
00:37:42.427 [2024-10-12 22:26:00.821679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.821708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.822058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.822086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.822473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.822510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.822876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.822903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.823312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.823340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 
00:37:42.427 [2024-10-12 22:26:00.823722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.823750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.824129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.824159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.824533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.824560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.824940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.824969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.825406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.825438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 
00:37:42.427 [2024-10-12 22:26:00.825812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.825840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.826118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.826148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.826519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.826548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.826931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.826959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.827324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.827353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 
00:37:42.427 [2024-10-12 22:26:00.827800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.827828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.828082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.828120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.828528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.828558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.828923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.828953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.829362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.829393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 
00:37:42.427 [2024-10-12 22:26:00.829783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.829811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.830134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.830164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.830525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.830554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.830917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.830948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.831202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.831232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 
00:37:42.427 [2024-10-12 22:26:00.831484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.831512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.831870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.831901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.832173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.832203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.832422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.832450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.832712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.832744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 
00:37:42.427 [2024-10-12 22:26:00.833017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.833046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.833424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.833454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.833844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.833873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.834248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.834278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.834656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.834685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 
00:37:42.427 [2024-10-12 22:26:00.835113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.835142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.835564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.427 [2024-10-12 22:26:00.835593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.427 qpair failed and we were unable to recover it. 00:37:42.427 [2024-10-12 22:26:00.835969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.835999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.836377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.836407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.836781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.836810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 
00:37:42.428 [2024-10-12 22:26:00.837090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.837136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.837523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.837551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.837923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.837958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.838348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.838379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.838779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.838810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 
00:37:42.428 [2024-10-12 22:26:00.839166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.839201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.839567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.839596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.839699] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:42.428 [2024-10-12 22:26:00.839772] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:42.428 [2024-10-12 22:26:00.839976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.840008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.840376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.840405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 
00:37:42.428 [2024-10-12 22:26:00.840791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.840821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.841084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.841130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.841355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.841385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.841701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.841730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.841992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.842025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 
00:37:42.428 [2024-10-12 22:26:00.842358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.842397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.842758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.842788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.843186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.843219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.843605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.843635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.843870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.843900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 
00:37:42.428 [2024-10-12 22:26:00.844271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.844302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.844680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.844710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.845093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.845161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.845542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.845572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.845941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.845970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 
00:37:42.428 [2024-10-12 22:26:00.846365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.846396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.846763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.846792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.847188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.847218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.847601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.847630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.428 [2024-10-12 22:26:00.848050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.848079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 
00:37:42.428 [2024-10-12 22:26:00.848464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.428 [2024-10-12 22:26:00.848495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.428 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.848758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.848786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.849087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.849129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.849543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.849574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.849942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.849971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 
00:37:42.429 [2024-10-12 22:26:00.850212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.850244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.850617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.850647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.851043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.851074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.851439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.851470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.851845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.851876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 
00:37:42.429 [2024-10-12 22:26:00.852237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.852268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.852631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.852663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.853158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.853189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.853545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.853573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.853935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.853964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 
00:37:42.429 [2024-10-12 22:26:00.854311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.854342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.854727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.854756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.855141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.855171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.855438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.855466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 00:37:42.429 [2024-10-12 22:26:00.855837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.429 [2024-10-12 22:26:00.855865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.429 qpair failed and we were unable to recover it. 
00:37:42.429 [2024-10-12 22:26:00.856239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.856268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.856540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.856570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.856911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.856941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.857309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.857339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.857706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.857735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.858010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.858039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.858416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.858447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.858827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.858856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.859226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.859257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.859653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.859682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.860064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.860092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.860443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.860474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.860692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.860720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.861093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.861140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.861417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.861447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.861896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.861925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.862281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.862311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.862566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.862595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.429 [2024-10-12 22:26:00.863009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.429 [2024-10-12 22:26:00.863038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.429 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.863420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.863449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.863805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.863835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.864231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.864263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.864748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.864777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.865129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.865160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.865530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.865559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.865937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.865966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.866364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.866393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.866628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.866658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.867038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.867067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.867421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.867450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.867832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.867862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.868136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.868167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.868529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.868564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.868920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.868949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.869345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.869375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.869744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.869772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.870209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.870239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.870625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.870653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.871039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.871067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.871476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.871505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.871848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.871876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.872232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.872268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.872646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.872675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.873052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.873083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.873493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.873523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.873912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.873942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.874330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.874360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.874609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.874638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.874894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.874923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.875317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.875347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.875727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.875756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.876037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.876066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.876476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.876506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.876879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.876908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.877215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.877246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.877482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.877510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.877886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.877914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.430 [2024-10-12 22:26:00.878282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.430 [2024-10-12 22:26:00.878312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.430 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.878676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.878704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.878968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.878996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.879389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.879418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.879671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.879698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.880071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.880100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.880501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.880529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.880907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.880934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.881370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.881400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.881773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.881802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.882180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.882210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.882358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.882385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.882776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.882806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.431 [2024-10-12 22:26:00.883074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.431 [2024-10-12 22:26:00.883113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.431 qpair failed and we were unable to recover it.
00:37:42.703 [2024-10-12 22:26:00.883440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.703 [2024-10-12 22:26:00.883472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.703 qpair failed and we were unable to recover it.
00:37:42.703 [2024-10-12 22:26:00.883848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.703 [2024-10-12 22:26:00.883886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.703 qpair failed and we were unable to recover it.
00:37:42.703 [2024-10-12 22:26:00.884137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.703 [2024-10-12 22:26:00.884170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.703 qpair failed and we were unable to recover it.
00:37:42.703 [2024-10-12 22:26:00.884346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.703 [2024-10-12 22:26:00.884374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.703 qpair failed and we were unable to recover it.
00:37:42.703 [2024-10-12 22:26:00.884751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.703 [2024-10-12 22:26:00.884779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.703 qpair failed and we were unable to recover it.
00:37:42.703 [2024-10-12 22:26:00.885171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.703 [2024-10-12 22:26:00.885203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.703 qpair failed and we were unable to recover it.
00:37:42.703 [2024-10-12 22:26:00.885450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.703 [2024-10-12 22:26:00.885478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.703 qpair failed and we were unable to recover it.
00:37:42.703 [2024-10-12 22:26:00.885847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.703 [2024-10-12 22:26:00.885875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.703 qpair failed and we were unable to recover it.
00:37:42.703 [2024-10-12 22:26:00.886252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.886283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.886643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.886671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.886911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.886939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.887206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.887235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.887618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.887646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.888023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.888053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.888414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.888443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.888889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.888918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.889171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.889200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.889459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.889487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.889853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.889880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.890247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.890277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.890646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.890675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.891062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.891091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.891361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.891390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.891757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.891786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.892192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.892222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.892592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.892620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.892979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.893009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.893376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.893407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.893803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.893831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.894215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.894245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.894618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.894646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.895009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.895039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.895387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.895417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.895673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.895701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.895964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.895992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.896386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.896416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.896769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.896798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.897045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.897074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.897343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.897373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.897591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.897620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.897989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.898018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.898405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.898442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.898671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.898700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.899165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.704 [2024-10-12 22:26:00.899195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.704 qpair failed and we were unable to recover it.
00:37:42.704 [2024-10-12 22:26:00.899565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.704 [2024-10-12 22:26:00.899592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.704 qpair failed and we were unable to recover it. 00:37:42.704 [2024-10-12 22:26:00.899981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.704 [2024-10-12 22:26:00.900009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.704 qpair failed and we were unable to recover it. 00:37:42.704 [2024-10-12 22:26:00.900443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.900473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.900833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.900862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.901262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.901291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 
00:37:42.705 [2024-10-12 22:26:00.901673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.901702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.902089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.902128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.902498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.902525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.902927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.902957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.903333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.903362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 
00:37:42.705 [2024-10-12 22:26:00.903740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.903768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.904155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.904184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.904569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.904598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.904995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.905023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.905393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.905422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 
00:37:42.705 [2024-10-12 22:26:00.905804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.905832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.906222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.906253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.906631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.906659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.907031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.907059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.907419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.907448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 
00:37:42.705 [2024-10-12 22:26:00.907849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.907878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.908242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.908272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.908655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.908683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.909057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.909086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.909476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.909505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 
00:37:42.705 [2024-10-12 22:26:00.909893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.909922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.910296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.910325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.910698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.910727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.910982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.911009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.911368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.911398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 
00:37:42.705 [2024-10-12 22:26:00.911658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.911686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.912042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.912070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.912351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.912380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.912624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.912652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.912906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.912943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 
00:37:42.705 [2024-10-12 22:26:00.913146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.913176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.913571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.913598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.913974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.914007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.914377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.914407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 00:37:42.705 [2024-10-12 22:26:00.914764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.705 [2024-10-12 22:26:00.914792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.705 qpair failed and we were unable to recover it. 
00:37:42.705 [2024-10-12 22:26:00.915198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.915228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.915601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.915628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.915956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.915984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.916276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.916306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.916537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.916564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 
00:37:42.706 [2024-10-12 22:26:00.916797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.916825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.917193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.917224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.917603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.917633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.918004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.918032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.918371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.918400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 
00:37:42.706 [2024-10-12 22:26:00.918664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.918696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.919005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.919041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.919302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.919333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.919703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.919731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.920113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.920144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 
00:37:42.706 [2024-10-12 22:26:00.920524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.920553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.920833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.920860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.921231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.921262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.921633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.921662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.922012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.922040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 
00:37:42.706 [2024-10-12 22:26:00.922438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.922467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.922843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.922871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.923284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.923313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.923674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.923703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.924076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.924115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 
00:37:42.706 [2024-10-12 22:26:00.924563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.924592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.924956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.924984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.925350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.925381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.925642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.925671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.925968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.925997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 
00:37:42.706 [2024-10-12 22:26:00.926386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.926416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.926756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.926785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.927179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.927209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.927652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.927679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.928054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.928082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 
00:37:42.706 [2024-10-12 22:26:00.928465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.928494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.928757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.928785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.929144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.929180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.929532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.706 [2024-10-12 22:26:00.929561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.706 qpair failed and we were unable to recover it. 00:37:42.706 [2024-10-12 22:26:00.929811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.929838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 
00:37:42.707 [2024-10-12 22:26:00.930202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.930231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.930621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.930649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.931026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.931053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.931432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.931461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.931836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.931865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 
00:37:42.707 [2024-10-12 22:26:00.932247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.932283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.932685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.932712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.933087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.933127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.933502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.933531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.933662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.933693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 
00:37:42.707 [2024-10-12 22:26:00.934075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.934116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.934355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.934383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.934625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.934654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.934891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.934919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.935266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.935297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 
00:37:42.707 [2024-10-12 22:26:00.935650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.935680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.936076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.936115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.936451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.936480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.936815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:42.707 [2024-10-12 22:26:00.936855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.936883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.937264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.937293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 
00:37:42.707 [2024-10-12 22:26:00.937654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.937683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.937948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.937976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.938228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.938260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.938505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.938533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.938910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.938940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 
00:37:42.707 [2024-10-12 22:26:00.939328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.939359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.939729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.939757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.940153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.940182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.940573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.940602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.940966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.940994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 
00:37:42.707 [2024-10-12 22:26:00.941349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.941380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.941728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.941756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.942156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.942187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.942557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.942584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.942963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.942991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 
00:37:42.707 [2024-10-12 22:26:00.943361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.943392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.943806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.943836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.944185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.944216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.944557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.707 [2024-10-12 22:26:00.944585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.707 qpair failed and we were unable to recover it. 00:37:42.707 [2024-10-12 22:26:00.944943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.944972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 
00:37:42.708 [2024-10-12 22:26:00.945198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.945231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.945590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.945619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.946006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.946035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.946405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.946435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.946793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.946822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 
00:37:42.708 [2024-10-12 22:26:00.947080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.947136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.947428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.947456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.947829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.947858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.948242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.948273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.948546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.948573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 
00:37:42.708 [2024-10-12 22:26:00.948936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.948970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.949329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.949359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.949597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.949625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.949845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.949873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.950249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.950279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 
00:37:42.708 [2024-10-12 22:26:00.950659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.950687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.951093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.951132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.951505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.951533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.951756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.951784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.952132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.952162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 
00:37:42.708 [2024-10-12 22:26:00.952558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.952586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.952974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.953001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.953357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.953388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.953773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.953802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.954230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.954260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 
00:37:42.708 [2024-10-12 22:26:00.954636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.954664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.955033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.955061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.955456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.955485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.708 [2024-10-12 22:26:00.955750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.708 [2024-10-12 22:26:00.955777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.708 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.956160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.956190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 
00:37:42.709 [2024-10-12 22:26:00.956575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.956604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.956811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.956839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.957239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.957269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.957646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.957674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.958093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.958132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 
00:37:42.709 [2024-10-12 22:26:00.958486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.958515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.958756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.958785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.959183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.959214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.959442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.959470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.959857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.959885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 
00:37:42.709 [2024-10-12 22:26:00.960133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.960164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.960526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.960555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.960784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.960812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.961087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.961142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.961527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.961558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 
00:37:42.709 [2024-10-12 22:26:00.961824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.961856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.962225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.962257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.962524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.962552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.962932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.962961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.963214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.963245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 
00:37:42.709 [2024-10-12 22:26:00.963594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.963632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.963915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.963944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.964337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.964368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.964750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.964781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.965143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.965173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 
00:37:42.709 [2024-10-12 22:26:00.965557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.965585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.965977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.966008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.966411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.966441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.966802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.966832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.967189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.967218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 
00:37:42.709 [2024-10-12 22:26:00.967608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.967636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.967897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.967924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.968195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.968225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.968486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.968518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.968908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.968937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 
00:37:42.709 [2024-10-12 22:26:00.969285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.969316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.969663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.969691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.970057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.970085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.970345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.709 [2024-10-12 22:26:00.970376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.709 qpair failed and we were unable to recover it. 00:37:42.709 [2024-10-12 22:26:00.970747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.970776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 
00:37:42.710 [2024-10-12 22:26:00.971141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.971171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.971538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.971566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.971939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.971967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.972328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.972358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.972726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.972755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 
00:37:42.710 [2024-10-12 22:26:00.972976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.973003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.973387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.973417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.973667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.973700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.974063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.974093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.974475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.974505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 
00:37:42.710 [2024-10-12 22:26:00.974836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.974864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.975117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.975151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.975511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.975539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.975906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.975934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.976319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.976348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 
00:37:42.710 [2024-10-12 22:26:00.976696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.976723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.977045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.977074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.977461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.977492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.977866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.977895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.978140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.978169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 
00:37:42.710 [2024-10-12 22:26:00.978599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.978634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.978976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.979006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.979377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.979407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.979759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.979789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.980204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.980234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 
00:37:42.710 [2024-10-12 22:26:00.980458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.980486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.980873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.980901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.981250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.981281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.981654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.981684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.982046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.982075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 
00:37:42.710 [2024-10-12 22:26:00.982481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.982510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.982855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.982884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.983227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.983258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.983600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.983628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.984013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.984043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 
00:37:42.710 [2024-10-12 22:26:00.984378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.710 [2024-10-12 22:26:00.984408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.710 qpair failed and we were unable to recover it. 00:37:42.710 [2024-10-12 22:26:00.984772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.984801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.985165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.985194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.985573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.985602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.985936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.985964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 
00:37:42.711 [2024-10-12 22:26:00.986318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.986347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.986710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.986738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.987113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.987142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.987480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.987508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.987704] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:42.711 [2024-10-12 22:26:00.987758] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:42.711 [2024-10-12 22:26:00.987766] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:42.711 [2024-10-12 22:26:00.987773] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:42.711 [2024-10-12 22:26:00.987780] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:42.711 [2024-10-12 22:26:00.987868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.987896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.987949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:37:42.711 [2024-10-12 22:26:00.988247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.988278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 [2024-10-12 22:26:00.988177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.988329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:37:42.711 [2024-10-12 22:26:00.988330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:37:42.711 [2024-10-12 22:26:00.988672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.988702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.989029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.989057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 
00:37:42.711 [2024-10-12 22:26:00.989486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.989517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.989919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.989947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.990346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.990374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.990738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.990766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.991139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.991167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 
00:37:42.711 [2024-10-12 22:26:00.991435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.991463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.991810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.991838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.992186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.992216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.992483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.992511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.992737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.992771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 
00:37:42.711 [2024-10-12 22:26:00.993127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.993157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.993388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.993416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.993834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.993862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.994222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.994251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.994574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.994603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 
00:37:42.711 [2024-10-12 22:26:00.994952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.994980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.995235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.995265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.995564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.995592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.995949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.995978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.996341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.996372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 
00:37:42.711 [2024-10-12 22:26:00.996577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.996606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.996752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.996780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.997043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.997075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.997359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.711 [2024-10-12 22:26:00.997389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.711 qpair failed and we were unable to recover it. 00:37:42.711 [2024-10-12 22:26:00.997739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.712 [2024-10-12 22:26:00.997768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.712 qpair failed and we were unable to recover it. 
00:37:42.712 [2024-10-12 22:26:00.998116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.712 [2024-10-12 22:26:00.998146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.712 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 22:26:00.998 through 22:26:01.039 ...]
00:37:42.715 [2024-10-12 22:26:01.039011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.715 [2024-10-12 22:26:01.039039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.715 qpair failed and we were unable to recover it.
00:37:42.715 [2024-10-12 22:26:01.039356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.039385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.039653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.039682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.039921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.039950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.040313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.040344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.040652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.040681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 
00:37:42.715 [2024-10-12 22:26:01.041013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.041041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.041254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.041282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.041675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.041703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.042057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.042085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.042448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.042476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 
00:37:42.715 [2024-10-12 22:26:01.042787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.042815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.043079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.043122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.043475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.043503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.043867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.043895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.044144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.044173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 
00:37:42.715 [2024-10-12 22:26:01.044402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.044429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.044695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.044724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.045055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.045084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.045467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.045496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.045862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.045892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 
00:37:42.715 [2024-10-12 22:26:01.046253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.046282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.046633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.046662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.046888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.046916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.047135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.047165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.047515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.047543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 
00:37:42.715 [2024-10-12 22:26:01.047911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.047939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.048253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.048283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.048591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.048619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.048898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.048927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.049030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.049064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 
00:37:42.715 [2024-10-12 22:26:01.049473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.715 [2024-10-12 22:26:01.049504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.715 qpair failed and we were unable to recover it. 00:37:42.715 [2024-10-12 22:26:01.049856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.049885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.050271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.050322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.050692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.050720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.051091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.051128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 
00:37:42.716 [2024-10-12 22:26:01.051486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.051515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.051893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.051922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.052034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.052061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.052426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.052455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.052833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.052860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 
00:37:42.716 [2024-10-12 22:26:01.053191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.053220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.053501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.053529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.053854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.053881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.054121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.054152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.054575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.054604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 
00:37:42.716 [2024-10-12 22:26:01.054864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.054894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.055135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.055163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.055555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.055583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.055930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.055958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.056323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.056352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 
00:37:42.716 [2024-10-12 22:26:01.056574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.056602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.056812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.056839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.057190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.057220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.057593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.057622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.057990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.058019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 
00:37:42.716 [2024-10-12 22:26:01.058357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.058386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.058739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.058768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.059133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.059162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.059523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.059551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.059780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.059808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 
00:37:42.716 [2024-10-12 22:26:01.060226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.060255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.060595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.060623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.060984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.061011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.061230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.061259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.061630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.061658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 
00:37:42.716 [2024-10-12 22:26:01.061997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.062025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.062231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.062259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.062468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.062496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.062778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.062807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.063193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.063228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 
00:37:42.716 [2024-10-12 22:26:01.063594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.716 [2024-10-12 22:26:01.063623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.716 qpair failed and we were unable to recover it. 00:37:42.716 [2024-10-12 22:26:01.063969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.717 [2024-10-12 22:26:01.063998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.717 qpair failed and we were unable to recover it. 00:37:42.717 [2024-10-12 22:26:01.064288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.717 [2024-10-12 22:26:01.064318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.717 qpair failed and we were unable to recover it. 00:37:42.717 [2024-10-12 22:26:01.064652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.717 [2024-10-12 22:26:01.064680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.717 qpair failed and we were unable to recover it. 00:37:42.717 [2024-10-12 22:26:01.065044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.717 [2024-10-12 22:26:01.065073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.717 qpair failed and we were unable to recover it. 
00:37:42.717 [2024-10-12 22:26:01.065479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:42.717 [2024-10-12 22:26:01.065509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 
00:37:42.717 qpair failed and we were unable to recover it. 
00:37:42.720 [2024-10-12 22:26:01.104656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.104686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.104900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.104928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.105271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.105300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.105617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.105646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.106013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.106040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 
00:37:42.720 [2024-10-12 22:26:01.106375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.106407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.106619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.106648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.106968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.106996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.107366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.107395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.107788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.107817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 
00:37:42.720 [2024-10-12 22:26:01.108185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.108215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.108430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.108458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.108831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.108860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.108974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.109001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.109100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.109138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 
00:37:42.720 [2024-10-12 22:26:01.109473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.109501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.109873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.109901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.110230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.110259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.110643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.110670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.111041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.111069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 
00:37:42.720 [2024-10-12 22:26:01.111280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.111308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.111648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.111676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.111898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.111926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.112166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.112196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.112554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.112582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 
00:37:42.720 [2024-10-12 22:26:01.112936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.112964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.113194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.113224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.720 [2024-10-12 22:26:01.113576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.720 [2024-10-12 22:26:01.113604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.720 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.113695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.113722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.114039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.114066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 
00:37:42.721 [2024-10-12 22:26:01.114272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.114301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.114637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.114665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.114966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.114994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.115383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.115411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.115731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.115759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 
00:37:42.721 [2024-10-12 22:26:01.115853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.115880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.116209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.116238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.116592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.116620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.116728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.116758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.116983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.117017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 
00:37:42.721 [2024-10-12 22:26:01.117385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.117415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.117627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.117656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.118033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.118061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.118413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.118441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.118832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.118860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 
00:37:42.721 [2024-10-12 22:26:01.119115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.119143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.119541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.119568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.119895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.119923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.120268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.120298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.120651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.120679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 
00:37:42.721 [2024-10-12 22:26:01.120883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.120911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.121117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.121147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.121478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.121506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.121860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.121888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.122236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.122265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 
00:37:42.721 [2024-10-12 22:26:01.122572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.122600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.122950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.122978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.123216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.123245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.123607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.123635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.123983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.124011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 
00:37:42.721 [2024-10-12 22:26:01.124406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.721 [2024-10-12 22:26:01.124435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.721 qpair failed and we were unable to recover it. 00:37:42.721 [2024-10-12 22:26:01.124790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.124818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.125144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.125173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.125484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.125511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.125864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.125893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 
00:37:42.722 [2024-10-12 22:26:01.126238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.126267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.126522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.126550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.126749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.126777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.127034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.127060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.127463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.127492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 
00:37:42.722 [2024-10-12 22:26:01.127860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.127888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.128245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.128274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.128621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.128649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.128875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.128904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.129222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.129252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 
00:37:42.722 [2024-10-12 22:26:01.129597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.129625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.129991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.130018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.130386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.130415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.130641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.130667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 00:37:42.722 [2024-10-12 22:26:01.131028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.722 [2024-10-12 22:26:01.131055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.722 qpair failed and we were unable to recover it. 
00:37:42.725 [2024-10-12 22:26:01.168715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.168744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.169111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.169141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.169343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.169370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.169593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.169620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.169952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.169979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 
00:37:42.725 [2024-10-12 22:26:01.170278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.170306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.170655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.170683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.171051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.171079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.171490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.171519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.171859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.171886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 
00:37:42.725 [2024-10-12 22:26:01.172237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.172268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.172657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.172685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.173044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.173073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.173423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.173453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.173768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.173795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 
00:37:42.725 [2024-10-12 22:26:01.174154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.174183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.174403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.174435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.174771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.174800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.174998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.175026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.175390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.175419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 
00:37:42.725 [2024-10-12 22:26:01.175639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.725 [2024-10-12 22:26:01.175668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.725 qpair failed and we were unable to recover it. 00:37:42.725 [2024-10-12 22:26:01.175909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.726 [2024-10-12 22:26:01.175942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.726 qpair failed and we were unable to recover it. 00:37:42.726 [2024-10-12 22:26:01.176244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.726 [2024-10-12 22:26:01.176274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.726 qpair failed and we were unable to recover it. 00:37:42.726 [2024-10-12 22:26:01.176648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.726 [2024-10-12 22:26:01.176677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.726 qpair failed and we were unable to recover it. 00:37:42.726 [2024-10-12 22:26:01.177120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.726 [2024-10-12 22:26:01.177149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.726 qpair failed and we were unable to recover it. 
00:37:42.726 [2024-10-12 22:26:01.177492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.726 [2024-10-12 22:26:01.177519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.726 qpair failed and we were unable to recover it. 00:37:42.726 [2024-10-12 22:26:01.177745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.726 [2024-10-12 22:26:01.177772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.726 qpair failed and we were unable to recover it. 00:37:42.726 [2024-10-12 22:26:01.177980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.726 [2024-10-12 22:26:01.178008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.726 qpair failed and we were unable to recover it. 00:37:42.726 [2024-10-12 22:26:01.178446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.726 [2024-10-12 22:26:01.178474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.726 qpair failed and we were unable to recover it. 00:37:42.726 [2024-10-12 22:26:01.178837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.726 [2024-10-12 22:26:01.178866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.726 qpair failed and we were unable to recover it. 
00:37:42.993 [2024-10-12 22:26:01.179179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.179211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.179575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.179604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.179803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.179830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.180187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.180217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.180523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.180552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 
00:37:42.993 [2024-10-12 22:26:01.180898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.180926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.181285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.181321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.181669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.181697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.182053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.182083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.182467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.182497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 
00:37:42.993 [2024-10-12 22:26:01.182844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.182871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.183222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.183251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.183477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.183507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.183752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.183781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.184139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.184169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 
00:37:42.993 [2024-10-12 22:26:01.184522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.184550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.184857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.184886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.185257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.185286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.185632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.185661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.185865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.185893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 
00:37:42.993 [2024-10-12 22:26:01.186288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.186317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.186625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.993 [2024-10-12 22:26:01.186651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.993 qpair failed and we were unable to recover it. 00:37:42.993 [2024-10-12 22:26:01.187049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.187077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.187431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.187460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.187775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.187804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 
00:37:42.994 [2024-10-12 22:26:01.187976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.188007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.188358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.188388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.188692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.188720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.189079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.189115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.189464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.189491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 
00:37:42.994 [2024-10-12 22:26:01.189844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.189872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.190224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.190253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.190610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.190638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.190986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.191016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.191239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.191268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 
00:37:42.994 [2024-10-12 22:26:01.191626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.191663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.192011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.192038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.192419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.192449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.192815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.192843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.193208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.193237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 
00:37:42.994 [2024-10-12 22:26:01.193460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.193487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.193827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.193854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.194074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.194101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.194485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.194513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.194830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.194858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 
00:37:42.994 [2024-10-12 22:26:01.195213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.195242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.195447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.195479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.195830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.195860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.196192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.196221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.196530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.196558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 
00:37:42.994 [2024-10-12 22:26:01.196903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.196931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.197295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.197323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.197515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.197542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.197890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.197918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 00:37:42.994 [2024-10-12 22:26:01.198167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.994 [2024-10-12 22:26:01.198197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.994 qpair failed and we were unable to recover it. 
00:37:42.994 [2024-10-12 22:26:01.198540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.994 [2024-10-12 22:26:01.198568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.994 qpair failed and we were unable to recover it.
00:37:42.994 [2024-10-12 22:26:01.198864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.994 [2024-10-12 22:26:01.198892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.994 qpair failed and we were unable to recover it.
00:37:42.994 [2024-10-12 22:26:01.199232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.994 [2024-10-12 22:26:01.199262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.994 qpair failed and we were unable to recover it.
00:37:42.994 [2024-10-12 22:26:01.199593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.994 [2024-10-12 22:26:01.199621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.994 qpair failed and we were unable to recover it.
00:37:42.994 [2024-10-12 22:26:01.199845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.994 [2024-10-12 22:26:01.199873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.200212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.200241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.200598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.200627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.200978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.201006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.201335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.201364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.201667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.201695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.202056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.202085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.202242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.202270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.202651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.202679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.202913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.202940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.203287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.203316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.203723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.203751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.204088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.204139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.204497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.204525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.204866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.204895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.205215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.205244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.205591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.205620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.205817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.205844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.205935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.205963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Read completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 Write completed with error (sct=0, sc=8)
00:37:42.995 starting I/O failed
00:37:42.995 [2024-10-12 22:26:01.206772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:42.995 [2024-10-12 22:26:01.207360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.207471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.207908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.207945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.208344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.208438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.208877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.208915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.209382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.209478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.209936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.209972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.210353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.210385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.210752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.210781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.995 [2024-10-12 22:26:01.210992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.995 [2024-10-12 22:26:01.211019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.995 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.211378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.211409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.211717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.211745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.212129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.212160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.212403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.212431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.212793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.212822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.213180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.213208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.213466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.213495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.213738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.213765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.214116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.214145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.214487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.214514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.214842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.214870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.215229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.215258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.215619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.215646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.215855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.215882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.216254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.216284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.216618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.216648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.216914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.216946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.217323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.217354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.217735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.217763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.218120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.218156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.218518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.218546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.218853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.218882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.219193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.219222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.219568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.219596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.219949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.219977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.220352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.220380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.220622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.220650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.220876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.220907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.221262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.221291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.221611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.221639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.221954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.221982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.222391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.222421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.222783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.222818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.223021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.223050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.223267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.223300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.223544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.223573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.223887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.996 [2024-10-12 22:26:01.223914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.996 qpair failed and we were unable to recover it.
00:37:42.996 [2024-10-12 22:26:01.224263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.224293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.224642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.224670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.225014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.225042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.225406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.225436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.225664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.225691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.226098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.226140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.226486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.226516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.226849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.226878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.227224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.227253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.227634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.227664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.228007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.228035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.228303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.228332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.228671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.228700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.228933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.228960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.229068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.229099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.229365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.229394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.229723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:42.997 [2024-10-12 22:26:01.229751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420
00:37:42.997 qpair failed and we were unable to recover it.
00:37:42.997 [2024-10-12 22:26:01.230111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.230140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.230483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.230511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.230866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.230896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.231259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.231288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.231611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.231638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 
00:37:42.997 [2024-10-12 22:26:01.231788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.231822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.232194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.232225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.232580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.232607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.232827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.232854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.233260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.233290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 
00:37:42.997 [2024-10-12 22:26:01.233639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.233667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.233983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.234011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.234382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.234412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.234633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.234661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.234868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.234896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 
00:37:42.997 [2024-10-12 22:26:01.235217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.235246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.235479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.235507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.997 [2024-10-12 22:26:01.235868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.997 [2024-10-12 22:26:01.235896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.997 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.236256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.236284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.236641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.236670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 
00:37:42.998 [2024-10-12 22:26:01.237027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.237055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.237411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.237440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.237572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.237607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.237931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.237960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.238324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.238352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 
00:37:42.998 [2024-10-12 22:26:01.238700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.238728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.238942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.238973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.239279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.239309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.239703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.239731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.240068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.240099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 
00:37:42.998 [2024-10-12 22:26:01.240444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.240473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.240823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.240853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.241198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.241228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.241625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.241653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.241950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.241979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 
00:37:42.998 [2024-10-12 22:26:01.242278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.242306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.242642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.242669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.243007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.243035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.243384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.243415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.243760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.243789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 
00:37:42.998 [2024-10-12 22:26:01.244140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.244169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.244398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.244426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.244788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.244816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.245159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.245188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.245279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.245305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8f8000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 
00:37:42.998 [2024-10-12 22:26:01.245687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.245793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.246338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.246432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.246877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.246913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.247165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.247209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.247567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.247596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 
00:37:42.998 [2024-10-12 22:26:01.247962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.247991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.248302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.248331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.248587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.248614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.998 qpair failed and we were unable to recover it. 00:37:42.998 [2024-10-12 22:26:01.248954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.998 [2024-10-12 22:26:01.248982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.249399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.249428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 
00:37:42.999 [2024-10-12 22:26:01.249769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.249797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.250178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.250208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.250543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.250571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.250911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.250940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.251325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.251356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 
00:37:42.999 [2024-10-12 22:26:01.251568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.251595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.251960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.251988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.252348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.252379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.252752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.252780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.253005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.253033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 
00:37:42.999 [2024-10-12 22:26:01.253405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.253435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.253770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.253798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.254144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.254174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.254534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.254563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.254900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.254929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 
00:37:42.999 [2024-10-12 22:26:01.255194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.255224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.255446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.255475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.255680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.255708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.256081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.256138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.256485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.256513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 
00:37:42.999 [2024-10-12 22:26:01.256877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.256905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.257228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.257257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.257527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.257564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.257885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.257912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.258256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.258285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 
00:37:42.999 [2024-10-12 22:26:01.258487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.258515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.258856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.258883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.259247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.259276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.259620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.259649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.260030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.260057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 
00:37:42.999 [2024-10-12 22:26:01.260415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.260453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 [2024-10-12 22:26:01.260545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.999 [2024-10-12 22:26:01.260572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8fc000b90 with addr=10.0.0.2, port=4420 00:37:42.999 qpair failed and we were unable to recover it. 00:37:42.999 Read completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Read completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Read completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Read completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Read completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Read completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Read completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Read completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 
00:37:42.999 Read completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Write completed with error (sct=0, sc=8) 00:37:42.999 starting I/O failed 00:37:42.999 Read completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 Write completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 Read completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 Read completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 Write completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 Read completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 Read completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 Write completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 Write completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 Write completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 Write completed with error (sct=0, sc=8) 00:37:43.000 starting I/O failed 00:37:43.000 [2024-10-12 22:26:01.261339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.000 [2024-10-12 22:26:01.261739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.000 [2024-10-12 22:26:01.261796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.000 qpair failed and we were unable to recover it. 
00:37:43.000 [2024-10-12 22:26:01.262173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.262224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.262442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.262472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.262825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.262853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.263211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.263242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.263602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.263630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.263843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.263872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.264191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.264221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.264559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.264587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.264727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.264761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.265128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.265158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.265366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.265394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.265600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.265629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.265931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.265958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.266318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.266348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.266679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.266709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.267046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.267075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.267449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.267478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.267821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.267852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.268212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.268242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.268589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.268617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.269037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.269065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.269275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.269305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.269651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.269679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.270016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.270045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.270252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.270283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.270617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.270647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.270996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.271024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.271254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.271285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.271616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.271644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.271950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.271978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.272291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.272327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.000 [2024-10-12 22:26:01.272677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.000 [2024-10-12 22:26:01.272707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.000 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.272903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.272932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.273280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.273309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.273679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.273708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.273915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.273944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.274296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.274328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.274683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.274711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.275060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.275091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.275438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.275467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.275788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.275819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.276159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.276190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.276413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.276445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.276814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.276843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.277116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.277147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.277458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.277486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.277813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.277842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.278196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.278227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.278477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.278508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.278753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.278781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.279022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.279052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.279396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.279425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.279798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.279828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.280167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.280197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.280546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.280574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.280913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.280941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.281316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.281345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.281680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.281709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.281925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.281955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.282189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.282219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.282590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.282624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.282868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.282895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.283086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.283124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.283452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.283482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.283818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.283846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.284217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.284247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.284587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.284615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.284956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.284986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.285203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.285232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.001 [2024-10-12 22:26:01.285594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.001 [2024-10-12 22:26:01.285621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.001 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.285958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.285993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.286224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.286255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.286478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.286508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.286720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.286749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.287062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.287088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.287373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.287401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.287741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.287769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.288129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.288159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.288511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.288539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.288769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.288797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.289055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.289084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.289452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.289480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.289794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.289823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.290176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.290206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.290570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.290600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.290968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.290996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.291332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.291362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.291550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.291577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.291835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.291865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.292093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.292132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.292377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.292404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.292750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.292778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.293152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.293181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.293425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.293452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.293836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.002 [2024-10-12 22:26:01.293864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.002 qpair failed and we were unable to recover it.
00:37:43.002 [2024-10-12 22:26:01.294219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.002 [2024-10-12 22:26:01.294250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.002 qpair failed and we were unable to recover it. 00:37:43.002 [2024-10-12 22:26:01.294616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.002 [2024-10-12 22:26:01.294644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.002 qpair failed and we were unable to recover it. 00:37:43.002 [2024-10-12 22:26:01.294997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.002 [2024-10-12 22:26:01.295032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.002 qpair failed and we were unable to recover it. 00:37:43.002 [2024-10-12 22:26:01.295356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.002 [2024-10-12 22:26:01.295386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.002 qpair failed and we were unable to recover it. 00:37:43.002 [2024-10-12 22:26:01.295779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.002 [2024-10-12 22:26:01.295808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.002 qpair failed and we were unable to recover it. 
00:37:43.002 [2024-10-12 22:26:01.296046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.296074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.296409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.296438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.296782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.296813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.297133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.297164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.297497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.297525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 
00:37:43.003 [2024-10-12 22:26:01.297873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.297900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.298089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.298126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.298506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.298534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.298871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.298900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.299227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.299256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 
00:37:43.003 [2024-10-12 22:26:01.299677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.299707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.299971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.299999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.300404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.300434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.300773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.300800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.300899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.300928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 
00:37:43.003 [2024-10-12 22:26:01.301259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.301290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.301691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.301720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.302069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.302097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.302447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.302477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.302815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.302844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 
00:37:43.003 [2024-10-12 22:26:01.303070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.303097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.303317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.303346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.303692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.303721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.303971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.304000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.304245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.304275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 
00:37:43.003 [2024-10-12 22:26:01.304614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.304642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.304995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.305023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.305259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.305289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.305638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.305666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.305787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.305816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 
00:37:43.003 [2024-10-12 22:26:01.306189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.306219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.306642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.306670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.306994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.307024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.307405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.307434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.307748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.307779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 
00:37:43.003 [2024-10-12 22:26:01.307997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.308025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.308374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.308405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.308734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.308769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.308992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.003 [2024-10-12 22:26:01.309021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.003 qpair failed and we were unable to recover it. 00:37:43.003 [2024-10-12 22:26:01.309364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.309394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 
00:37:43.004 [2024-10-12 22:26:01.309748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.309777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.310120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.310150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.310313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.310342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.310676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.310703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.311045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.311074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 
00:37:43.004 [2024-10-12 22:26:01.311425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.311455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.311755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.311783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.312156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.312184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.312279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.312306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.312517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.312545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 
00:37:43.004 [2024-10-12 22:26:01.312891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.312919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.313260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.313292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.313528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.313556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.313867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.313896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.314253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.314283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 
00:37:43.004 [2024-10-12 22:26:01.314629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.314656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.314880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.314908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.315257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.315288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.315587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.315617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.315953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.315982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 
00:37:43.004 [2024-10-12 22:26:01.316308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.316338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.316672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.316701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.316925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.316953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.317192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.317222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.317592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.317621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 
00:37:43.004 [2024-10-12 22:26:01.317948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.317977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.318357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.318386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.318580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.318609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.318971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.319001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.319335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.319364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 
00:37:43.004 [2024-10-12 22:26:01.319565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.319596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.319951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.319979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.320236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.320265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.320512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.320539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.320914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.320942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 
00:37:43.004 [2024-10-12 22:26:01.321325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.321355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.321566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.004 [2024-10-12 22:26:01.321597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.004 qpair failed and we were unable to recover it. 00:37:43.004 [2024-10-12 22:26:01.321953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.321995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.322339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.322368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.322577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.322605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 
00:37:43.005 [2024-10-12 22:26:01.322821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.322850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.323202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.323231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.323460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.323488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.323860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.323888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.324241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.324270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 
00:37:43.005 [2024-10-12 22:26:01.324593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.324622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.324937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.324963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.325316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.325346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.325562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.325590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.325935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.325962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 
00:37:43.005 [2024-10-12 22:26:01.326317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.326347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.326582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.326609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.326829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.326857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.327193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.327222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.327548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.327577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 
00:37:43.005 [2024-10-12 22:26:01.327941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.327968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.328330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.328358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.328737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.328765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.329001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.329028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.329373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.329403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 
00:37:43.005 [2024-10-12 22:26:01.329749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.329778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.329987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.330015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.330252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.330281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.330608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.330636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.330849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.330877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 
00:37:43.005 [2024-10-12 22:26:01.331247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.331277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.331619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.331647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.331985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.332013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.332372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.332401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.332751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.332779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 
00:37:43.005 [2024-10-12 22:26:01.333149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.333177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.333511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.333540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.333781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.333809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.334169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.334198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.334543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.334571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 
00:37:43.005 [2024-10-12 22:26:01.334908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.005 [2024-10-12 22:26:01.334936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.005 qpair failed and we were unable to recover it. 00:37:43.005 [2024-10-12 22:26:01.335281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.335311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.335514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.335547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.335891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.335920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.336258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.336288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 
00:37:43.006 [2024-10-12 22:26:01.336514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.336542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.336896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.336923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.337222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.337253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.337480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.337508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.337835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.337864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 
00:37:43.006 [2024-10-12 22:26:01.338180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.338209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.338554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.338583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.338938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.338966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.339217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.339248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.339652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.339681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 
00:37:43.006 [2024-10-12 22:26:01.340041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.340069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.340289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.340320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.340667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.340698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.341030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.341066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.341394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.341425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 
00:37:43.006 [2024-10-12 22:26:01.341786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.341815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.342161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.342191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.342534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.342562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.342858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.342888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.343226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.343256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 
00:37:43.006 [2024-10-12 22:26:01.343584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.343613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.343946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.343976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.344319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.344349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.344695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.344723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.344952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.344980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 
00:37:43.006 [2024-10-12 22:26:01.345226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.345256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.345601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.345630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.345963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.345992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.346203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.346232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.346589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.346617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 
00:37:43.006 [2024-10-12 22:26:01.346843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.346871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.347122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.347152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.006 qpair failed and we were unable to recover it. 00:37:43.006 [2024-10-12 22:26:01.347373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.006 [2024-10-12 22:26:01.347404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.347683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.347712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.348059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.348087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 
00:37:43.007 [2024-10-12 22:26:01.348323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.348351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.348692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.348720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.349069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.349111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.349457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.349486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.349743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.349771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 
00:37:43.007 [2024-10-12 22:26:01.349977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.350005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.350236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.350265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.350629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.350658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.351006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.351033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.351294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.351323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 
00:37:43.007 [2024-10-12 22:26:01.351667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.351695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.351920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.351947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.352284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.352314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.352534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.352563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.352785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.352816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 
00:37:43.007 [2024-10-12 22:26:01.353009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.353037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.353393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.353424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.353766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.353794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.354150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.354179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.354515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.354544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 
00:37:43.007 [2024-10-12 22:26:01.354902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.354931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.355259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.355289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.355614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.355642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.355884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.355912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.356121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.356150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 
00:37:43.007 [2024-10-12 22:26:01.356459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.356487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.356854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.356881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.357178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.357207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.357461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.357490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 00:37:43.007 [2024-10-12 22:26:01.357862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.007 [2024-10-12 22:26:01.357890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.007 qpair failed and we were unable to recover it. 
00:37:43.007 [2024-10-12 22:26:01.358243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.358272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.358476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.358504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.358862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.358890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.359150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.359179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.359391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.359418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 
00:37:43.008 [2024-10-12 22:26:01.359742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.359769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.360126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.360155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.360413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.360440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.360838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.360867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.361169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.361217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 
00:37:43.008 [2024-10-12 22:26:01.361570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.361598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.361838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.361865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.362129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.362165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.362511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.362539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.362895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.362923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 
00:37:43.008 [2024-10-12 22:26:01.363349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.363379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.363539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.363568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.363940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.363969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.364247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.364276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.364512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.364540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 
00:37:43.008 [2024-10-12 22:26:01.364768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.364796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.365150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.365179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.365530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.365558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.365917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.365945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.366191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.366220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 
00:37:43.008 [2024-10-12 22:26:01.366535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.366563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.366823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.366851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.367111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.367140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.367555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.367583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.367949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.367978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 
00:37:43.008 [2024-10-12 22:26:01.368227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.368259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.368485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.368513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.368890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.368919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.369158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.369186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 00:37:43.008 [2024-10-12 22:26:01.369577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.369606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.008 qpair failed and we were unable to recover it. 
00:37:43.008 [2024-10-12 22:26:01.369959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.008 [2024-10-12 22:26:01.369988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.370360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.370389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.370727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.370756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.370944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.370973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.371316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.371346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 
00:37:43.009 [2024-10-12 22:26:01.371690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.371717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.372110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.372139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.372366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.372394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.372657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.372686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.372912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.372942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 
00:37:43.009 [2024-10-12 22:26:01.373294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.373323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.373705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.373733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.374115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.374145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.374387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.374415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.374667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.374695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 
00:37:43.009 [2024-10-12 22:26:01.375048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.375076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.375419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.375455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.375819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.375854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.376182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.376211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.376481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.376509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 
00:37:43.009 [2024-10-12 22:26:01.376746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.376774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.377120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.377149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.377343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.377372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.377615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.377643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.377981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.378009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 
00:37:43.009 [2024-10-12 22:26:01.378354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.378383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.378740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.378769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.379129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.379159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.379508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.379536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.379754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.379785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 
00:37:43.009 [2024-10-12 22:26:01.380141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.380171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.380540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.380569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.380911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.380940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.381322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.381351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.381562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.381589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 
00:37:43.009 [2024-10-12 22:26:01.381935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.381963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.382332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.382363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.382719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.382746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.383057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.383084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.009 [2024-10-12 22:26:01.383446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.383476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 
00:37:43.009 [2024-10-12 22:26:01.383690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.009 [2024-10-12 22:26:01.383718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.009 qpair failed and we were unable to recover it. 00:37:43.010 [2024-10-12 22:26:01.383922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.010 [2024-10-12 22:26:01.383950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.010 qpair failed and we were unable to recover it. 00:37:43.010 [2024-10-12 22:26:01.384180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.010 [2024-10-12 22:26:01.384209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.010 qpair failed and we were unable to recover it. 00:37:43.010 [2024-10-12 22:26:01.384343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.010 [2024-10-12 22:26:01.384371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.010 qpair failed and we were unable to recover it. 00:37:43.010 [2024-10-12 22:26:01.384707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.010 [2024-10-12 22:26:01.384737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.010 qpair failed and we were unable to recover it. 
00:37:43.010 [2024-10-12 22:26:01.385061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.010 [2024-10-12 22:26:01.385091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.010 qpair failed and we were unable to recover it. 00:37:43.010 [2024-10-12 22:26:01.385455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.010 [2024-10-12 22:26:01.385484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.010 qpair failed and we were unable to recover it. 00:37:43.010 [2024-10-12 22:26:01.385747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.010 [2024-10-12 22:26:01.385775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.010 qpair failed and we were unable to recover it. 00:37:43.010 [2024-10-12 22:26:01.386111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.010 [2024-10-12 22:26:01.386141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.010 qpair failed and we were unable to recover it. 00:37:43.010 [2024-10-12 22:26:01.386367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.010 [2024-10-12 22:26:01.386395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.010 qpair failed and we were unable to recover it. 
00:37:43.010 [2024-10-12 22:26:01.386691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.010 [2024-10-12 22:26:01.386719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.010 qpair failed and we were unable to recover it. 
00:37:43.013 [2024-10-12 22:26:01.423858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.423886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.424188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.424217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.424439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.424467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.424694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.424725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.424977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.425005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 
00:37:43.013 [2024-10-12 22:26:01.425127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.425156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.425496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.425525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.425869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.425898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.426276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.426304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.426541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.426571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 
00:37:43.013 [2024-10-12 22:26:01.426934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.426962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.427375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.427404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.427621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.427657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.427953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.427981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.428237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.428266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 
00:37:43.013 [2024-10-12 22:26:01.428622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.428652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.428971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.429002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.429360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.429390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.429737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.429765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.430128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.430157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 
00:37:43.013 [2024-10-12 22:26:01.430492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.430520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.430745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.430774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.431129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.431158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.431515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.431544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.431912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.431940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 
00:37:43.013 [2024-10-12 22:26:01.432341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.432369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.432725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.432754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.013 [2024-10-12 22:26:01.433112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.013 [2024-10-12 22:26:01.433142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.013 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.433549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.433578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.433786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.433816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 
00:37:43.014 [2024-10-12 22:26:01.434151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.434181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.434524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.434553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.434898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.434926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.435299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.435328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.435682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.435711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 
00:37:43.014 [2024-10-12 22:26:01.436045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.436074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.436476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.436505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.436814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.436841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.437061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.437089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.437457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.437487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 
00:37:43.014 [2024-10-12 22:26:01.437842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.437871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.438168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.438198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.438398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.438426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.438633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.438661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.439026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.439054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 
00:37:43.014 [2024-10-12 22:26:01.439379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.439407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.439763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.439791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.440136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.440166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.440517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.440546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.440854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.440882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 
00:37:43.014 [2024-10-12 22:26:01.441249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.441279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.441512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.441542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.441735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.441771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.442120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.442150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.442490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.442518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 
00:37:43.014 [2024-10-12 22:26:01.442835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.442863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.443072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.443101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.443332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.443360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.443696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.443724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.443932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.443961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 
00:37:43.014 [2024-10-12 22:26:01.444313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.444343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.444543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.444571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.444870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.444899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.445245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.445275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.445628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.445655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 
00:37:43.014 [2024-10-12 22:26:01.445950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.014 [2024-10-12 22:26:01.445977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.014 qpair failed and we were unable to recover it. 00:37:43.014 [2024-10-12 22:26:01.446363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.446392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.446716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.446747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.446948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.446976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.447251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.447279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 
00:37:43.015 [2024-10-12 22:26:01.447474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.447502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.447846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.447874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.448220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.448250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.448444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.448473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.448686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.448713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 
00:37:43.015 [2024-10-12 22:26:01.449076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.449124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.449468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.449497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.449747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.449774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.449974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.450000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.450363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.450393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 
00:37:43.015 [2024-10-12 22:26:01.450742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.450769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.451119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.451147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.451475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.451504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.451835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.451863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.452068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.452096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 
00:37:43.015 [2024-10-12 22:26:01.452448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.452477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.452814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.452842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.453065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.453093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.453335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.453363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.453632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.453659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 
00:37:43.015 [2024-10-12 22:26:01.453858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.453888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.454236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.454265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.454610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.454649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.454986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.455014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.455350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.455379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 
00:37:43.015 [2024-10-12 22:26:01.455722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.455752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.456096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.456132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.456440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.456467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.456683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.456711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.456924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.456951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 
00:37:43.015 [2024-10-12 22:26:01.457165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.457194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.457514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.457542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.457890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.457918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.458297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.458326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.458657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.458686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 
00:37:43.015 [2024-10-12 22:26:01.458996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.015 [2024-10-12 22:26:01.459023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.015 qpair failed and we were unable to recover it. 00:37:43.015 [2024-10-12 22:26:01.459144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.459175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.459497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.459526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.459771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.459801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.460055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.460084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 
00:37:43.016 [2024-10-12 22:26:01.460295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.460324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.460664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.460692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.461055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.461082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.461444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.461473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.461818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.461845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 
00:37:43.016 [2024-10-12 22:26:01.462188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.462218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.462554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.462584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.462925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.462953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.463311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.463339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.463549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.463583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 
00:37:43.016 [2024-10-12 22:26:01.463947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.463975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.464232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.464260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.464621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.464649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.464999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.465027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.465364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.465392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 
00:37:43.016 [2024-10-12 22:26:01.465700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.465728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.466085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.466143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.466465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.466494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.466844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.466871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.467214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.467245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 
00:37:43.016 [2024-10-12 22:26:01.467445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.467473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.467692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.467721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.468072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.468100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.468412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.468440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.468791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.468820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 
00:37:43.016 [2024-10-12 22:26:01.469162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.469194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.469542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.469570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.469882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.469911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.470160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.470189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 00:37:43.016 [2024-10-12 22:26:01.470393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.016 [2024-10-12 22:26:01.470420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.016 qpair failed and we were unable to recover it. 
00:37:43.016 [2024-10-12 22:26:01.470636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.017 [2024-10-12 22:26:01.470666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.017 qpair failed and we were unable to recover it. 00:37:43.017 [2024-10-12 22:26:01.471006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.017 [2024-10-12 22:26:01.471035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.017 qpair failed and we were unable to recover it. 00:37:43.017 [2024-10-12 22:26:01.471402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.017 [2024-10-12 22:26:01.471433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.017 qpair failed and we were unable to recover it. 00:37:43.017 [2024-10-12 22:26:01.471773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.017 [2024-10-12 22:26:01.471802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.017 qpair failed and we were unable to recover it. 00:37:43.017 [2024-10-12 22:26:01.472150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.017 [2024-10-12 22:26:01.472180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.017 qpair failed and we were unable to recover it. 
00:37:43.017 [2024-10-12 22:26:01.472411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.017 [2024-10-12 22:26:01.472439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.017 qpair failed and we were unable to recover it. 00:37:43.017 [2024-10-12 22:26:01.472800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.017 [2024-10-12 22:26:01.472829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.017 qpair failed and we were unable to recover it. 00:37:43.017 [2024-10-12 22:26:01.473189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.017 [2024-10-12 22:26:01.473218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.017 qpair failed and we were unable to recover it. 00:37:43.017 [2024-10-12 22:26:01.473564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.017 [2024-10-12 22:26:01.473592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.017 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.473893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.473922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 
00:37:43.289 [2024-10-12 22:26:01.474138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.474167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.474369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.474396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.474740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.474768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.475063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.475091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.475334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.475363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 
00:37:43.289 [2024-10-12 22:26:01.475678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.475706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.475963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.475991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.476355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.476385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.476734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.476762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.477113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.477148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 
00:37:43.289 [2024-10-12 22:26:01.477495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.477523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.477768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.289 [2024-10-12 22:26:01.477798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.289 qpair failed and we were unable to recover it. 00:37:43.289 [2024-10-12 22:26:01.478137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.290 [2024-10-12 22:26:01.478167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.290 qpair failed and we were unable to recover it. 00:37:43.290 [2024-10-12 22:26:01.478500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.290 [2024-10-12 22:26:01.478529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.290 qpair failed and we were unable to recover it. 00:37:43.290 [2024-10-12 22:26:01.478877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.290 [2024-10-12 22:26:01.478905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.290 qpair failed and we were unable to recover it. 
00:37:43.290 [2024-10-12 22:26:01.479219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.290 [2024-10-12 22:26:01.479248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.290 qpair failed and we were unable to recover it. 00:37:43.290 [2024-10-12 22:26:01.479592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.290 [2024-10-12 22:26:01.479620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.290 qpair failed and we were unable to recover it. 00:37:43.290 [2024-10-12 22:26:01.479950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.290 [2024-10-12 22:26:01.479979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.290 qpair failed and we were unable to recover it. 00:37:43.290 [2024-10-12 22:26:01.480193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.290 [2024-10-12 22:26:01.480222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.290 qpair failed and we were unable to recover it. 00:37:43.290 [2024-10-12 22:26:01.480538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.290 [2024-10-12 22:26:01.480567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.290 qpair failed and we were unable to recover it. 
00:37:43.290 [2024-10-12 22:26:01.480899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.290 [2024-10-12 22:26:01.480927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.290 qpair failed and we were unable to recover it. 
00:37:43.293 [2024-10-12 22:26:01.519707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.519736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.520075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.520124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.520459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.520488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.520819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.520848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.521184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.521214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 
00:37:43.293 [2024-10-12 22:26:01.521547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.521576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.521802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.521833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.522161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.522189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.522393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.522422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.522774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.522801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 
00:37:43.293 [2024-10-12 22:26:01.523132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.523160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.523499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.523527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.523862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.523891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.524100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.524144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.524488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.524516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 
00:37:43.293 [2024-10-12 22:26:01.524717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.524744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.525100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.525137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.525469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.525497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.525863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.525892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.526122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.526151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 
00:37:43.293 [2024-10-12 22:26:01.526343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.526370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.526742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.526770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.526976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.527006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.527351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.527379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.527750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.527779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 
00:37:43.293 [2024-10-12 22:26:01.528116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.528146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.528476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.528505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.528715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.528744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.529083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.529120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 00:37:43.293 [2024-10-12 22:26:01.529460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.293 [2024-10-12 22:26:01.529487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.293 qpair failed and we were unable to recover it. 
00:37:43.293 [2024-10-12 22:26:01.529707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.529735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.530087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.530126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.530429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.530464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.530712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.530740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.531075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.531119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 
00:37:43.294 [2024-10-12 22:26:01.531484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.531513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.531853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.531881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.532234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.532264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.532615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.532644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.532856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.532886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 
00:37:43.294 [2024-10-12 22:26:01.533228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.533258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.533456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.533484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.533656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.533685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.534024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.534052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.534383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.534412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 
00:37:43.294 [2024-10-12 22:26:01.534762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.534790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.535141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.535169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.535573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.535601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.535876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.535904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.536132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.536162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 
00:37:43.294 [2024-10-12 22:26:01.536387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.536416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.536746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.536774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.537064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.537094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.537441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.537470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.537780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.537809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 
00:37:43.294 [2024-10-12 22:26:01.538011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.538040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.538343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.538374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.538585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.538613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.538956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.538986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.539187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.539216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 
00:37:43.294 [2024-10-12 22:26:01.539516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.539544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.539882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.539910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.540263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.540293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.540646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.540674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.294 qpair failed and we were unable to recover it. 00:37:43.294 [2024-10-12 22:26:01.541014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.294 [2024-10-12 22:26:01.541043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 
00:37:43.295 [2024-10-12 22:26:01.541378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.541408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.541759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.541787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.542120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.542149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.542468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.542496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.542708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.542736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 
00:37:43.295 [2024-10-12 22:26:01.542962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.542991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.543200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.543229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.543533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.543562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.543902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.543930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.544267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.544303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 
00:37:43.295 [2024-10-12 22:26:01.544633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.544662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.544897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.544926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.545159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.545191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.545529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.545556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.545794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.545824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 
00:37:43.295 [2024-10-12 22:26:01.546163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.546193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.546522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.546550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.546750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.546778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.547166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.547195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 00:37:43.295 [2024-10-12 22:26:01.547528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.295 [2024-10-12 22:26:01.547556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.295 qpair failed and we were unable to recover it. 
00:37:43.295 [2024-10-12 22:26:01.547899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.547929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.548299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.548329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.548674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.548702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.549037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.549067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.549355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.549387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.549735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.549764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.549983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.550010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.550365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.550394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.550748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.550778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.551117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.551146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.551394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.551421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.551765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.551794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.552122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.552150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.552410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.552442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.552766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.552803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.553143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.553173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.553296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.553324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.553688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.553716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.554067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.295 [2024-10-12 22:26:01.554096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.295 qpair failed and we were unable to recover it.
00:37:43.295 [2024-10-12 22:26:01.554340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.554368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.554703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.554731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.555099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.555137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.555497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.555525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.555822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.555850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.555941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.555968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.556308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.556337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.556703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.556731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.556938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.556966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.557316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.557346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.557695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.557729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.558134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.558162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.558456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.558484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.558832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.558860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.559204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.559232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.559564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.559592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.559951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.559979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.560223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.560251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.560575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.560603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.560838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.560868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.561209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.561238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.561574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.561602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.561811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.561838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.562040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.562067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.562400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.562429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.562646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.562673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.563014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.563041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.563355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.563384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.563723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.563751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.564005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.564032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.564248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.564277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.564504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.564530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.564890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.564917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.565130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.565163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.565514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.565543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.565776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.565805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.566151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.566181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.566534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.566563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.566697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.296 [2024-10-12 22:26:01.566729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.296 qpair failed and we were unable to recover it.
00:37:43.296 [2024-10-12 22:26:01.567084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.567137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.567468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.567497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.567710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.567741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.568065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.568093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.568444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.568474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.568786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.568815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.569032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.569060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.569453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.569483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.569824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.569852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.570143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.570175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.570502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.570530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.570874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.570910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.571238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.571268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.571610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.571639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.571982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.572010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.572341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.572370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.572701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.572729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.573066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.573095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.573312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.573341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.573577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.573609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.573950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.573978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.574191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.574221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.574434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.574462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.574803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.574832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.575060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.575088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.575444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.575473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.575803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.575831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.576198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.576227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.576577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.576605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.576796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.576824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.577123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.577153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.577370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.577400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.577735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.577763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.578136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.578167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.578507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.578535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.578732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.578759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.579038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.297 [2024-10-12 22:26:01.579066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.297 qpair failed and we were unable to recover it.
00:37:43.297 [2024-10-12 22:26:01.579309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.297 [2024-10-12 22:26:01.579342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.297 qpair failed and we were unable to recover it. 00:37:43.297 [2024-10-12 22:26:01.579584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.297 [2024-10-12 22:26:01.579613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.297 qpair failed and we were unable to recover it. 00:37:43.297 [2024-10-12 22:26:01.579848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.297 [2024-10-12 22:26:01.579877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.297 qpair failed and we were unable to recover it. 00:37:43.297 [2024-10-12 22:26:01.580287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.580317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.580615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.580642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 
00:37:43.298 [2024-10-12 22:26:01.580997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.581025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.581249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.581278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.581492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.581521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.581867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.581895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.582231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.582261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 
00:37:43.298 [2024-10-12 22:26:01.582586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.582615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.582973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.583001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.583351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.583381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.583708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.583737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.584081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.584126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 
00:37:43.298 [2024-10-12 22:26:01.584464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.584492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.584829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.584858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.585053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.585081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.585430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.585459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.585592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.585620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 
00:37:43.298 [2024-10-12 22:26:01.585727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.585756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.586078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.586124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.586485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.586515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.586747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.586775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.586978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.587006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 
00:37:43.298 [2024-10-12 22:26:01.587209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.587238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.587590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.587618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.587971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.587999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.588209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.588238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.588337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.588366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 
00:37:43.298 [2024-10-12 22:26:01.588671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.588700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.588912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.588940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.589280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.589309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.589650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.589683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.590046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.590075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 
00:37:43.298 [2024-10-12 22:26:01.590297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.590328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.590665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.298 [2024-10-12 22:26:01.590694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.298 qpair failed and we were unable to recover it. 00:37:43.298 [2024-10-12 22:26:01.590902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.590929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.591276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.591306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.591674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.591702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 
00:37:43.299 [2024-10-12 22:26:01.592063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.592090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.592474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.592505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.592878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.592908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.593125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.593154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.593546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.593575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 
00:37:43.299 [2024-10-12 22:26:01.593770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.593798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.594173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.594202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.594434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.594461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.594805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.594834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.595177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.595207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 
00:37:43.299 [2024-10-12 22:26:01.595551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.595580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.595926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.595954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.596179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.596207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.596451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.596478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.596835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.596871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 
00:37:43.299 [2024-10-12 22:26:01.597214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.597243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.597475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.597504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.597833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.597863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.598220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.598249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.598462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.598489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 
00:37:43.299 [2024-10-12 22:26:01.598691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.598718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.599056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.599084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.599426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.599455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.599802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.599831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.600079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.600115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 
00:37:43.299 [2024-10-12 22:26:01.600462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.600490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.600841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.600870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.601122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.601152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.601536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.601565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.601879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.601908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 
00:37:43.299 [2024-10-12 22:26:01.602282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.602310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.602662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.602691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.603067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.299 [2024-10-12 22:26:01.603095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.299 qpair failed and we were unable to recover it. 00:37:43.299 [2024-10-12 22:26:01.603458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.603486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 00:37:43.300 [2024-10-12 22:26:01.603838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.603865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 
00:37:43.300 [2024-10-12 22:26:01.604083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.604127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 00:37:43.300 [2024-10-12 22:26:01.604490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.604518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 00:37:43.300 [2024-10-12 22:26:01.604761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.604790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 00:37:43.300 [2024-10-12 22:26:01.605137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.605166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 00:37:43.300 [2024-10-12 22:26:01.605399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.605427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 
00:37:43.300 [2024-10-12 22:26:01.605656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.605685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 00:37:43.300 [2024-10-12 22:26:01.606038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.606067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 00:37:43.300 [2024-10-12 22:26:01.606305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.606334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 00:37:43.300 [2024-10-12 22:26:01.606679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.606708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 00:37:43.300 [2024-10-12 22:26:01.606917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.300 [2024-10-12 22:26:01.606944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.300 qpair failed and we were unable to recover it. 
00:37:43.300 [2024-10-12 22:26:01.607293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:43.300 [2024-10-12 22:26:01.607322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 
00:37:43.300 qpair failed and we were unable to recover it. 
00:37:43.300 [the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record for tqpair=0x7fb904000b90, addr=10.0.0.2, port=4420 repeats from 22:26:01.607 through 22:26:01.645] 
00:37:43.303 [2024-10-12 22:26:01.645437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.645466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.645710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.645747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.646070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.646099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.646340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.646371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.646705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.646733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 
00:37:43.303 [2024-10-12 22:26:01.646949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.646977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.647397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.647426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.647635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.647662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.647900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.647927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.648162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.648189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 
00:37:43.303 [2024-10-12 22:26:01.648566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.648593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.648966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.648993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.649226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.649255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.649479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.649513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.649862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.649890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 
00:37:43.303 [2024-10-12 22:26:01.650289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.650318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.650529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.650558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.650779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.650807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.651138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.651167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.651546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.651575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 
00:37:43.303 [2024-10-12 22:26:01.651920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.651948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.303 [2024-10-12 22:26:01.652187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.303 [2024-10-12 22:26:01.652217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.303 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.652560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.652587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.652931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.652959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.653211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.653239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 
00:37:43.304 [2024-10-12 22:26:01.653595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.653623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.653854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.653882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.654225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.654254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.654623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.654652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.654993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.655021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 
00:37:43.304 [2024-10-12 22:26:01.655230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.655260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.655508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.655535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.655882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.655911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.656225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.656254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.656605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.656634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 
00:37:43.304 [2024-10-12 22:26:01.656984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.657012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:43.304 [2024-10-12 22:26:01.657242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.657273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:37:43.304 [2024-10-12 22:26:01.657567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.657595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:43.304 [2024-10-12 22:26:01.657947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.657975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 
00:37:43.304 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:43.304 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:43.304 [2024-10-12 22:26:01.658318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.658361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.658673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.658702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.659013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.659047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.659405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.659435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 
00:37:43.304 [2024-10-12 22:26:01.659652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.659680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.660028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.660055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.660320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.660351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.660694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.660723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.660933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.660961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 
00:37:43.304 [2024-10-12 22:26:01.661301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.661332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.661564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.661591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.661806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.661835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.662041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.662072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.662420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.662456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 
00:37:43.304 [2024-10-12 22:26:01.662769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.662799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.663020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.663048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.663276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.663304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.663664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.663694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.664003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.664031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 
00:37:43.304 [2024-10-12 22:26:01.664394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.664423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.304 [2024-10-12 22:26:01.664830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.304 [2024-10-12 22:26:01.664858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.304 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.665139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.665169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.665526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.665553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.665922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.665952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 
00:37:43.305 [2024-10-12 22:26:01.666310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.666340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.666662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.666691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.667003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.667032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.667354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.667384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.667699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.667729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 
00:37:43.305 [2024-10-12 22:26:01.668072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.668100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.668455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.668483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.668821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.668850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.669202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.669231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.669452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.669480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 
00:37:43.305 [2024-10-12 22:26:01.669816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.669846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.670127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.670157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.670501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.670529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.670778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.670807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 00:37:43.305 [2024-10-12 22:26:01.671017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.305 [2024-10-12 22:26:01.671050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.305 qpair failed and we were unable to recover it. 
00:37:43.307 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:43.307 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:37:43.307 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:43.307 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:43.308 [2024-10-12 22:26:01.706934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.706963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.707321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.707350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.707676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.707705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.707893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.707922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.708160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.708187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 
00:37:43.308 [2024-10-12 22:26:01.708444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.708471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.708713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.708744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.709085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.709800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.710309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.710344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.710702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.710732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 
00:37:43.308 [2024-10-12 22:26:01.711080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.711130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.711517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.711558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.711871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.711900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.712139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.712168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.712255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.712282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 
00:37:43.308 [2024-10-12 22:26:01.712638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.712666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.712973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.713001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.713344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.713372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.713726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.713755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.714017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 Malloc0 00:37:43.308 [2024-10-12 22:26:01.714044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 
00:37:43.308 [2024-10-12 22:26:01.714416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.714446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 [2024-10-12 22:26:01.714648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.714676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.308 [2024-10-12 22:26:01.714905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.714933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 00:37:43.308 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:43.308 [2024-10-12 22:26:01.715279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.308 [2024-10-12 22:26:01.715308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.308 qpair failed and we were unable to recover it. 
00:37:43.308 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.309 [2024-10-12 22:26:01.715622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.715651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:43.309 [2024-10-12 22:26:01.716005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.716033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.716294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.716325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.716671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.716699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 
00:37:43.309 [2024-10-12 22:26:01.717009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.717037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.717240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.717270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.717672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.717700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.717929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.717957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.718269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.718299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 
00:37:43.309 [2024-10-12 22:26:01.718495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.718523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.718830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.718859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.719165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.719194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.719399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.719427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.719770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.719799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 
00:37:43.309 [2024-10-12 22:26:01.720147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.720177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.720497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.720526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.720833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.720862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.721169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.721198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.721453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.309 [2024-10-12 22:26:01.721543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.721571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 
00:37:43.309 [2024-10-12 22:26:01.721939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.721968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.722200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.722228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.722584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.722613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.722970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.722998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.723376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.723404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 
00:37:43.309 [2024-10-12 22:26:01.723754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.723783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.724127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.724162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.724488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.724517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.724913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.724942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.725304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.725333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 
00:37:43.309 [2024-10-12 22:26:01.725692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.725721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.726049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.726078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.726430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.726458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.726678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.726705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.726935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.726962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 
00:37:43.309 [2024-10-12 22:26:01.727314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.727344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.309 qpair failed and we were unable to recover it. 00:37:43.309 [2024-10-12 22:26:01.727551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.309 [2024-10-12 22:26:01.727578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.727705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.727731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.728112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.728141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.728492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.728519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 
00:37:43.310 [2024-10-12 22:26:01.728846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.728875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.729210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.729239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.729565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.729593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.729908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.729937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.730256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.730285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 
00:37:43.310 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.310 [2024-10-12 22:26:01.730669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.730697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:43.310 [2024-10-12 22:26:01.730917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.730944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.310 [2024-10-12 22:26:01.731256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.731285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 
00:37:43.310 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:43.310 [2024-10-12 22:26:01.731641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.731670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.731905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.731933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.732143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.732175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.732525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.732554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.732773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.732801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 
00:37:43.310 [2024-10-12 22:26:01.733147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.733177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.733427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.733454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.733793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.733822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.734137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.734165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 00:37:43.310 [2024-10-12 22:26:01.734456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.310 [2024-10-12 22:26:01.734485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420 00:37:43.310 qpair failed and we were unable to recover it. 
00:37:43.310 [2024-10-12 22:26:01.734820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.734848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.735220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.735248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.735445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.735474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.735824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.735851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.736212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.736241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.736593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.736621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.736965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.736998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.737360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.737389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.737737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.737766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.737986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.738013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.738346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.738375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.310 qpair failed and we were unable to recover it.
00:37:43.310 [2024-10-12 22:26:01.738588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.310 [2024-10-12 22:26:01.738615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.738947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.738975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.739169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.739197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.739564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.739592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.739919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.739948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.740183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.740212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.740554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.740582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.740931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.740960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.741300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.741329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.741672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.741700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.741922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.741949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.742296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.742325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:43.311 [2024-10-12 22:26:01.742664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.742692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:37:43.311 [2024-10-12 22:26:01.743031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.743059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.743277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:43.311 [2024-10-12 22:26:01.743305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:43.311 [2024-10-12 22:26:01.743667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.743695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.744011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.744040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.744402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.744433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.744777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.744805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.745144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.745173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.745519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.745548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.745739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.745766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.746095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.746132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.746485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.746513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.746743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.746771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.747117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.747145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.747461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.747488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.747842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.747869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.748215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.748245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.748447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.748475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.748617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.748644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.748972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.749001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.749328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.749357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.749703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.749731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.750067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.750096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.750335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.750367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.750673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.750702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.751047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.751075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.311 [2024-10-12 22:26:01.751420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.311 [2024-10-12 22:26:01.751448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.311 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.751703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.751731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.752069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.752097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.752432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.752460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.752806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.752835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.753179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.753208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.753411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.753438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.753804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.753832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.754185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.754214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:43.312 [2024-10-12 22:26:01.754578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.754607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.754822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.754849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:43.312 [2024-10-12 22:26:01.755222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.755252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:43.312 [2024-10-12 22:26:01.755456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.755483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:43.312 [2024-10-12 22:26:01.755704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.755735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.756062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.756092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.756435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.756464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.756821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.756850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.757076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.757114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.757311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.757339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.757543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.757571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.757921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.757956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.758236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.758265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.758605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.758633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.758905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.758933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.759269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.759297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.759499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.759528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.759625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.759653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.759975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.760004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.760371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.760401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.760496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.760524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.760859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.760887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.761236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.761266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.312 [2024-10-12 22:26:01.761628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:43.312 [2024-10-12 22:26:01.761656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb904000b90 with addr=10.0.0.2, port=4420
00:37:43.312 qpair failed and we were unable to recover it.
00:37:43.313 [2024-10-12 22:26:01.761738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:43.574 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:43.574 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:37:43.574 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:43.574 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:43.574 [2024-10-12 22:26:01.772468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.574 [2024-10-12 22:26:01.772609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.574 [2024-10-12 22:26:01.772657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.574 [2024-10-12 22:26:01.772687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.574 [2024-10-12 22:26:01.772708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:43.574 [2024-10-12 22:26:01.772768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:43.574 qpair failed and we were unable to recover it.
00:37:43.574 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:43.574 22:26:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3778321
00:37:43.574 [2024-10-12 22:26:01.782293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:43.574 [2024-10-12 22:26:01.782391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:43.574 [2024-10-12 22:26:01.782418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:43.574 [2024-10-12 22:26:01.782432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:43.574 [2024-10-12 22:26:01.782446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:43.574 [2024-10-12 22:26:01.782474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:43.574 qpair failed and we were unable to recover it.
00:37:43.574 [2024-10-12 22:26:01.792328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.574 [2024-10-12 22:26:01.792388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.574 [2024-10-12 22:26:01.792407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.574 [2024-10-12 22:26:01.792417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.574 [2024-10-12 22:26:01.792427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.574 [2024-10-12 22:26:01.792446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.574 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.802258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.802317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.802331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.802338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.802348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.802362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.812300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.812361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.812375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.812382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.812388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.812402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.822304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.822355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.822369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.822376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.822383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.822396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.832364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.832456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.832470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.832476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.832483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.832498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.842355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.842411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.842424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.842431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.842438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.842452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.852512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.852580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.852593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.852600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.852607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.852620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.862424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.862472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.862485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.862492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.862498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.862512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.872487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.872535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.872549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.872556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.872562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.872576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.882520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.882589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.882603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.882610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.882616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.882630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.892508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.892557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.892571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.892581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.892588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.892602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.902492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.902533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.902547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.902553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.902560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.902573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.912574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.912623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.912636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.912643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.912649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.912663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.922606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.922659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.922673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.922679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.922686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.922699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.932652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.575 [2024-10-12 22:26:01.932710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.575 [2024-10-12 22:26:01.932724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.575 [2024-10-12 22:26:01.932731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.575 [2024-10-12 22:26:01.932738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.575 [2024-10-12 22:26:01.932757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.575 qpair failed and we were unable to recover it. 
00:37:43.575 [2024-10-12 22:26:01.942615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:01.942664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:01.942678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:01.942685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:01.942692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:01.942706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:01.952557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:01.952615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:01.952628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:01.952635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:01.952641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:01.952655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:01.962712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:01.962765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:01.962778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:01.962785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:01.962791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:01.962805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:01.972723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:01.972780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:01.972794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:01.972801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:01.972807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:01.972821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:01.982717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:01.982762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:01.982775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:01.982785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:01.982792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:01.982806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:01.992665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:01.992752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:01.992768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:01.992775] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:01.992781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:01.992801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:02.002925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:02.003014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:02.003028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:02.003035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:02.003041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:02.003055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:02.012909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:02.012998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:02.013012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:02.013018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:02.013025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:02.013040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:02.022867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:02.022915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:02.022929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:02.022935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:02.022942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:02.022956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:02.032940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:02.033005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:02.033019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:02.033025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:02.033032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:02.033046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:02.042814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:02.042906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:02.042921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:02.042929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:02.042935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:02.042949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.576 [2024-10-12 22:26:02.052958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.576 [2024-10-12 22:26:02.053010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.576 [2024-10-12 22:26:02.053024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.576 [2024-10-12 22:26:02.053031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.576 [2024-10-12 22:26:02.053038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.576 [2024-10-12 22:26:02.053052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.576 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.062933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.062979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.062993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.063000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.063007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.063020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.072997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.073054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.073072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.073079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.073085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.073099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.083040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.083098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.083116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.083122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.083129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.083143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.093077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.093136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.093150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.093156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.093163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.093177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.103031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.103093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.103110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.103117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.103123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.103137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.113092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.113147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.113161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.113168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.113174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.113192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.123127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.123185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.123199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.123206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.123212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.123226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.133061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.133154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.133168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.133175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.133182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.133197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.143160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.143210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.143224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.143231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.143238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.143252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.153246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.153291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.153304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.153311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.153317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.153331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.163201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.163255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.163272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.163279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.163285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.163299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.838 qpair failed and we were unable to recover it. 
00:37:43.838 [2024-10-12 22:26:02.173318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.838 [2024-10-12 22:26:02.173376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.838 [2024-10-12 22:26:02.173389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.838 [2024-10-12 22:26:02.173396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.838 [2024-10-12 22:26:02.173402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.838 [2024-10-12 22:26:02.173416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.183264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.183328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.183343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.183349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.183357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.183375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.193374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.193435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.193449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.193456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.193463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.193477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.203283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.203340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.203354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.203361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.203367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.203384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.213309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.213363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.213377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.213384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.213390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.213409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.223374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.223433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.223447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.223453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.223460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.223474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.233458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.233510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.233523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.233530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.233537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.233551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.243495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.243547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.243561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.243567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.243574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.243587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.253539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.253589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.253609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.253615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.253622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.253635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.263527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.263572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.263585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.263592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.263598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.263612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.273580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.273675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.273688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.273695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.273701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.273716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.283612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.283669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.283682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.283689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.283695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.283709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.293624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.293682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.293695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.293702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.293711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.293725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.303587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.303636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.303649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.303656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.303663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.303677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.313667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.313716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.839 [2024-10-12 22:26:02.313729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.839 [2024-10-12 22:26:02.313736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.839 [2024-10-12 22:26:02.313742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.839 [2024-10-12 22:26:02.313756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.839 qpair failed and we were unable to recover it. 
00:37:43.839 [2024-10-12 22:26:02.323720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:43.839 [2024-10-12 22:26:02.323770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:43.840 [2024-10-12 22:26:02.323784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:43.840 [2024-10-12 22:26:02.323790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:43.840 [2024-10-12 22:26:02.323797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:43.840 [2024-10-12 22:26:02.323810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:43.840 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.333759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.333863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.333876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.333883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.333890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.333903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.343622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.343674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.343687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.343694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.343701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.343714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.353810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.353862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.353875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.353882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.353888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.353902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.363842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.363930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.363944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.363950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.363957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.363970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.373863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.373916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.373930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.373936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.373943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.373957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.383854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.383899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.383913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.383919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.383929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.383943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.393804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.393863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.393876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.393883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.393889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.393903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.403973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.404028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.404041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.404048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.404054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.404068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.413993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.414050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.414063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.414070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.414076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.414090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.423957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.423999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.424012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.424019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.424025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.424038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.434042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.102 [2024-10-12 22:26:02.434109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.102 [2024-10-12 22:26:02.434125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.102 [2024-10-12 22:26:02.434132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.102 [2024-10-12 22:26:02.434142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.102 [2024-10-12 22:26:02.434158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.102 qpair failed and we were unable to recover it. 
00:37:44.102 [2024-10-12 22:26:02.443955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.444010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.444025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.444032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.444039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.444053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.454107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.454163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.454177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.454184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.454191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.454205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.464066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.464116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.464130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.464137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.464143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.464157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.474120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.474169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.474183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.474194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.474200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.474215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.484175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.484265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.484279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.484286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.484292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.484306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.494219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.494307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.494321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.494328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.494334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.494348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.504183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.504235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.504248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.504255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.504261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.504275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.514275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.514330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.514344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.514351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.514357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.514371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.524283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.524338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.524351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.524358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.524364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.524378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.534331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.534384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.534397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.534403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.534410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.534423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.544274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.544325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.544339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.544345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.544352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.544366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.554347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.554394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.554407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.554414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.554420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.554434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.564394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.564456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.564473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.564479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.564486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.103 [2024-10-12 22:26:02.564500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.103 qpair failed and we were unable to recover it. 
00:37:44.103 [2024-10-12 22:26:02.574413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.103 [2024-10-12 22:26:02.574492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.103 [2024-10-12 22:26:02.574505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.103 [2024-10-12 22:26:02.574512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.103 [2024-10-12 22:26:02.574518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.104 [2024-10-12 22:26:02.574531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.104 qpair failed and we were unable to recover it. 
00:37:44.104 [2024-10-12 22:26:02.584421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.104 [2024-10-12 22:26:02.584466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.104 [2024-10-12 22:26:02.584479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.104 [2024-10-12 22:26:02.584485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.104 [2024-10-12 22:26:02.584492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.104 [2024-10-12 22:26:02.584506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.104 qpair failed and we were unable to recover it. 
00:37:44.365 [2024-10-12 22:26:02.594354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.365 [2024-10-12 22:26:02.594407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.365 [2024-10-12 22:26:02.594420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.365 [2024-10-12 22:26:02.594427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.365 [2024-10-12 22:26:02.594433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.365 [2024-10-12 22:26:02.594447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.365 qpair failed and we were unable to recover it. 
00:37:44.365 [2024-10-12 22:26:02.604381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.365 [2024-10-12 22:26:02.604442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.365 [2024-10-12 22:26:02.604455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.365 [2024-10-12 22:26:02.604462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.365 [2024-10-12 22:26:02.604468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.365 [2024-10-12 22:26:02.604482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.365 qpair failed and we were unable to recover it. 
00:37:44.365 [2024-10-12 22:26:02.614541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.365 [2024-10-12 22:26:02.614600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.365 [2024-10-12 22:26:02.614613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.365 [2024-10-12 22:26:02.614619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.365 [2024-10-12 22:26:02.614626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.614639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.624518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.624578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.624592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.624598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.624604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.624618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.634581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.634628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.634641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.634648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.634654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.634667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.644591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.644647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.644660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.644666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.644673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.644687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.654628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.654707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.654723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.654730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.654736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.654750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.664629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.664672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.664685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.664692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.664698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.664712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.674663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.674717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.674731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.674738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.674744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.674758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.684728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.684785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.684799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.684806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.684812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.684829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.694769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.694857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.694872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.694878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.694885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.694904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.704755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.704808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.704822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.704828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.704835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.704849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.714809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.714859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.714872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.714879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.714885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.714899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.724851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.724907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.724920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.724927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.724933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.724947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.734755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.734848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.734861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.734869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.734875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.734889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.744856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.744908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.744924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.366 [2024-10-12 22:26:02.744931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.366 [2024-10-12 22:26:02.744937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.366 [2024-10-12 22:26:02.744951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.366 qpair failed and we were unable to recover it. 
00:37:44.366 [2024-10-12 22:26:02.754905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.366 [2024-10-12 22:26:02.754956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.366 [2024-10-12 22:26:02.754970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.367 [2024-10-12 22:26:02.754977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.367 [2024-10-12 22:26:02.754983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.367 [2024-10-12 22:26:02.754997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.367 qpair failed and we were unable to recover it. 
00:37:44.367 [2024-10-12 22:26:02.764958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.367 [2024-10-12 22:26:02.765013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.367 [2024-10-12 22:26:02.765026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.367 [2024-10-12 22:26:02.765033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.367 [2024-10-12 22:26:02.765040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.367 [2024-10-12 22:26:02.765053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.367 qpair failed and we were unable to recover it. 
00:37:44.367 [2024-10-12 22:26:02.774984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.367 [2024-10-12 22:26:02.775066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.367 [2024-10-12 22:26:02.775079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.367 [2024-10-12 22:26:02.775086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.367 [2024-10-12 22:26:02.775092] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.367 [2024-10-12 22:26:02.775109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.367 qpair failed and we were unable to recover it. 
00:37:44.367 [2024-10-12 22:26:02.784972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.367 [2024-10-12 22:26:02.785017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.367 [2024-10-12 22:26:02.785030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.367 [2024-10-12 22:26:02.785037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.367 [2024-10-12 22:26:02.785047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.367 [2024-10-12 22:26:02.785061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.367 qpair failed and we were unable to recover it. 
00:37:44.367 [2024-10-12 22:26:02.794908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.367 [2024-10-12 22:26:02.794964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.367 [2024-10-12 22:26:02.794977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.367 [2024-10-12 22:26:02.794984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.367 [2024-10-12 22:26:02.794990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.367 [2024-10-12 22:26:02.795003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.367 qpair failed and we were unable to recover it. 
00:37:44.367 [2024-10-12 22:26:02.805079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.367 [2024-10-12 22:26:02.805163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.367 [2024-10-12 22:26:02.805177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.367 [2024-10-12 22:26:02.805184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.367 [2024-10-12 22:26:02.805191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.367 [2024-10-12 22:26:02.805205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.367 qpair failed and we were unable to recover it. 
00:37:44.367 [2024-10-12 22:26:02.815095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.367 [2024-10-12 22:26:02.815153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.367 [2024-10-12 22:26:02.815167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.367 [2024-10-12 22:26:02.815173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.367 [2024-10-12 22:26:02.815180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.367 [2024-10-12 22:26:02.815194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.367 qpair failed and we were unable to recover it. 
00:37:44.367 [2024-10-12 22:26:02.825092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.367 [2024-10-12 22:26:02.825147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.367 [2024-10-12 22:26:02.825161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.367 [2024-10-12 22:26:02.825167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.367 [2024-10-12 22:26:02.825174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.367 [2024-10-12 22:26:02.825188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.367 qpair failed and we were unable to recover it. 
00:37:44.367 [2024-10-12 22:26:02.835138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.367 [2024-10-12 22:26:02.835212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.367 [2024-10-12 22:26:02.835225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.367 [2024-10-12 22:26:02.835232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.367 [2024-10-12 22:26:02.835238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.367 [2024-10-12 22:26:02.835252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.367 qpair failed and we were unable to recover it. 
00:37:44.367 [2024-10-12 22:26:02.845167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.367 [2024-10-12 22:26:02.845266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.367 [2024-10-12 22:26:02.845279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.367 [2024-10-12 22:26:02.845286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.367 [2024-10-12 22:26:02.845293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.367 [2024-10-12 22:26:02.845306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.367 qpair failed and we were unable to recover it. 
00:37:44.629 [2024-10-12 22:26:02.855201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.629 [2024-10-12 22:26:02.855292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.629 [2024-10-12 22:26:02.855305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.629 [2024-10-12 22:26:02.855312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.629 [2024-10-12 22:26:02.855319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.629 [2024-10-12 22:26:02.855332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.629 qpair failed and we were unable to recover it. 
00:37:44.629 [2024-10-12 22:26:02.865045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.629 [2024-10-12 22:26:02.865092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.629 [2024-10-12 22:26:02.865109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.629 [2024-10-12 22:26:02.865116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.629 [2024-10-12 22:26:02.865123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.629 [2024-10-12 22:26:02.865136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.629 qpair failed and we were unable to recover it. 
00:37:44.629 [2024-10-12 22:26:02.875217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.629 [2024-10-12 22:26:02.875272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.875286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.875293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.875303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.875317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.885289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.885343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.885356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.885363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.885369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.885383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.895324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.895379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.895392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.895399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.895406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.895419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.905295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.905345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.905358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.905365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.905371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.905384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.915340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.915389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.915402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.915409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.915415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.915428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.925413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.925466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.925480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.925487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.925493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.925507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.935442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.935496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.935509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.935516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.935522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.935536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.945418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.945466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.945479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.945486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.945492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.945506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.955461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.955513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.955526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.955532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.955539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.955552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.965508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.965567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.965580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.965591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.965598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.965612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.975530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.975589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.975603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.975609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.975616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.975630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.985494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.985541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.985554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.985560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.985567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.985580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:02.995566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:02.995622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:02.995635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:02.995642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.630 [2024-10-12 22:26:02.995648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.630 [2024-10-12 22:26:02.995662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.630 qpair failed and we were unable to recover it. 
00:37:44.630 [2024-10-12 22:26:03.005607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.630 [2024-10-12 22:26:03.005658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.630 [2024-10-12 22:26:03.005671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.630 [2024-10-12 22:26:03.005678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.005684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.005698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.631 [2024-10-12 22:26:03.015642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.631 [2024-10-12 22:26:03.015731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.631 [2024-10-12 22:26:03.015746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.631 [2024-10-12 22:26:03.015753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.015759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.015777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.631 [2024-10-12 22:26:03.025674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.631 [2024-10-12 22:26:03.025719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.631 [2024-10-12 22:26:03.025733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.631 [2024-10-12 22:26:03.025740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.025746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.025761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.631 [2024-10-12 22:26:03.035653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.631 [2024-10-12 22:26:03.035702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.631 [2024-10-12 22:26:03.035715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.631 [2024-10-12 22:26:03.035722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.035729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.035743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.631 [2024-10-12 22:26:03.045717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.631 [2024-10-12 22:26:03.045774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.631 [2024-10-12 22:26:03.045787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.631 [2024-10-12 22:26:03.045794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.045801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.045815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.631 [2024-10-12 22:26:03.055764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.631 [2024-10-12 22:26:03.055816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.631 [2024-10-12 22:26:03.055829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.631 [2024-10-12 22:26:03.055840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.055846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.055860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.631 [2024-10-12 22:26:03.065745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.631 [2024-10-12 22:26:03.065797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.631 [2024-10-12 22:26:03.065822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.631 [2024-10-12 22:26:03.065830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.065838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.065857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.631 [2024-10-12 22:26:03.075804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.631 [2024-10-12 22:26:03.075864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.631 [2024-10-12 22:26:03.075879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.631 [2024-10-12 22:26:03.075887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.075893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.075909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.631 [2024-10-12 22:26:03.085829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.631 [2024-10-12 22:26:03.085893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.631 [2024-10-12 22:26:03.085907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.631 [2024-10-12 22:26:03.085913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.085920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.085935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.631 [2024-10-12 22:26:03.095881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.631 [2024-10-12 22:26:03.095939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.631 [2024-10-12 22:26:03.095953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.631 [2024-10-12 22:26:03.095960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.095966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.095980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.631 [2024-10-12 22:26:03.105862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.631 [2024-10-12 22:26:03.105934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.631 [2024-10-12 22:26:03.105948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.631 [2024-10-12 22:26:03.105956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.631 [2024-10-12 22:26:03.105963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.631 [2024-10-12 22:26:03.105977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.631 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.115912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.115965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.115979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.115986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.115992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.116006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.125967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.126022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.126037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.126044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.126050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.126065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.135974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.136035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.136048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.136055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.136062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.136076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.145969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.146014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.146031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.146037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.146044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.146058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.156039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.156139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.156153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.156160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.156166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.156180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.166069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.166149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.166162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.166170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.166176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.166190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.176140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.176211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.176225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.176232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.176238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.176253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.186075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.186156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.186171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.186178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.186185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.186205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.196146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.196225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.196239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.196246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.196252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.196266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.206176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.206234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.206247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.206254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.206261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.206275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.216203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.216254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.216268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.216275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.216281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.216295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.226189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.226284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.226297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.894 [2024-10-12 22:26:03.226304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.894 [2024-10-12 22:26:03.226310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.894 [2024-10-12 22:26:03.226324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.894 qpair failed and we were unable to recover it. 
00:37:44.894 [2024-10-12 22:26:03.236252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.894 [2024-10-12 22:26:03.236302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.894 [2024-10-12 22:26:03.236319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.236326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.236333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.236346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.246287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.246342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.246355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.246362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.246368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.246382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.256339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.256393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.256406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.256413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.256419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.256433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.266271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.266320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.266334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.266340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.266347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.266361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.276346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.276394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.276408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.276414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.276421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.276439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.286411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.286465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.286478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.286484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.286491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.286504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.296354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.296405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.296418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.296425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.296432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.296445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.306450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.306532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.306545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.306552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.306558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.306572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.316475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.316535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.316548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.316555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.316561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.316575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.326509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.326571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.326584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.326591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.326597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.326611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.336551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.336608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.336621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.336628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.336635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.336648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.346508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.346552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.346565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.346571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.346578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.346591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.356584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.356635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.356648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.356655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.895 [2024-10-12 22:26:03.356662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.895 [2024-10-12 22:26:03.356675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.895 qpair failed and we were unable to recover it. 
00:37:44.895 [2024-10-12 22:26:03.366614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.895 [2024-10-12 22:26:03.366668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.895 [2024-10-12 22:26:03.366681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.895 [2024-10-12 22:26:03.366688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.896 [2024-10-12 22:26:03.366698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.896 [2024-10-12 22:26:03.366712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.896 qpair failed and we were unable to recover it. 
00:37:44.896 [2024-10-12 22:26:03.376661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:44.896 [2024-10-12 22:26:03.376717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:44.896 [2024-10-12 22:26:03.376731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:44.896 [2024-10-12 22:26:03.376738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:44.896 [2024-10-12 22:26:03.376744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:44.896 [2024-10-12 22:26:03.376758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:44.896 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.386627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.386681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.386694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.386701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.386708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.386721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.396681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.396731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.396745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.396752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.396759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.396772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.406729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.406780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.406793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.406800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.406806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.406820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.416807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.416888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.416902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.416908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.416915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.416928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.426719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.426773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.426798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.426806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.426814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.426833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.436800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.436862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.436887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.436896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.436903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.436923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.446848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.446906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.446921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.446928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.446935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.446950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.456756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.456814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.456828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.456840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.456846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.456860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.466846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.466899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.466913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.466920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.466926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.466941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.476924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.476976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.476990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.476996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.477003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.477017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.486952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.487007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.487022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.487029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.487035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.487050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.497006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.158 [2024-10-12 22:26:03.497061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.158 [2024-10-12 22:26:03.497075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.158 [2024-10-12 22:26:03.497082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.158 [2024-10-12 22:26:03.497088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.158 [2024-10-12 22:26:03.497105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.158 qpair failed and we were unable to recover it. 
00:37:45.158 [2024-10-12 22:26:03.506846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.506901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.506916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.506923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.506930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.506945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.517002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.517048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.517064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.517071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.517078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.517093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.527060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.527121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.527135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.527143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.527149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.527164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.537088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.537148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.537162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.537169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.537175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.537190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.547082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.547129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.547143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.547154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.547161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.547176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.557095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.557158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.557172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.557179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.557186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.557200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.567180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.567233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.567246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.567253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.567260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.567274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.577199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.577255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.577269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.577276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.577282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.577296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.587204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.587281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.587294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.587301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.587307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.587322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.597241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.597286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.597299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.597306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.597313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.597327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.607269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.607332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.607345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.607352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.607359] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.607372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.617272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.617377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.617391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.617398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.617405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.617419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.627308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.627356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.627369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.627376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.627383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.627397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.159 [2024-10-12 22:26:03.637203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.159 [2024-10-12 22:26:03.637283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.159 [2024-10-12 22:26:03.637300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.159 [2024-10-12 22:26:03.637307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.159 [2024-10-12 22:26:03.637314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.159 [2024-10-12 22:26:03.637328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.159 qpair failed and we were unable to recover it. 
00:37:45.421 [2024-10-12 22:26:03.647420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.421 [2024-10-12 22:26:03.647472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.421 [2024-10-12 22:26:03.647485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.421 [2024-10-12 22:26:03.647492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.421 [2024-10-12 22:26:03.647498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.421 [2024-10-12 22:26:03.647512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.421 qpair failed and we were unable to recover it. 
00:37:45.421 [2024-10-12 22:26:03.657438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.421 [2024-10-12 22:26:03.657490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.421 [2024-10-12 22:26:03.657504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.421 [2024-10-12 22:26:03.657511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.421 [2024-10-12 22:26:03.657517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.422 [2024-10-12 22:26:03.657531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.422 qpair failed and we were unable to recover it. 
00:37:45.422 [2024-10-12 22:26:03.667404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-10-12 22:26:03.667454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-10-12 22:26:03.667467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-10-12 22:26:03.667474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-10-12 22:26:03.667481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.422 [2024-10-12 22:26:03.667494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.422 qpair failed and we were unable to recover it. 
00:37:45.422 [2024-10-12 22:26:03.677433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-10-12 22:26:03.677483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-10-12 22:26:03.677497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-10-12 22:26:03.677504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-10-12 22:26:03.677511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.422 [2024-10-12 22:26:03.677528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.422 qpair failed and we were unable to recover it. 
00:37:45.422 [2024-10-12 22:26:03.687498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.422 [2024-10-12 22:26:03.687550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.422 [2024-10-12 22:26:03.687564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.422 [2024-10-12 22:26:03.687571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.422 [2024-10-12 22:26:03.687577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.422 [2024-10-12 22:26:03.687591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.422 qpair failed and we were unable to recover it. 
00:37:45.422 [2024-10-12 22:26:03.697538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.697596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.697609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.697616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.697623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.697637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.422 [2024-10-12 22:26:03.707507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.707562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.707575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.707582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.707589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.707603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.422 [2024-10-12 22:26:03.717530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.717622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.717636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.717644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.717651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.717665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.422 [2024-10-12 22:26:03.727602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.727659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.727675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.727682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.727689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.727703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.422 [2024-10-12 22:26:03.737662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.737722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.737735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.737743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.737749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.737764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.422 [2024-10-12 22:26:03.747624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.747671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.747684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.747692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.747698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.747713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.422 [2024-10-12 22:26:03.757534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.757579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.757592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.757600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.757607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.757621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.422 [2024-10-12 22:26:03.767763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.767847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.767860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.767867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.767874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.767896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.422 [2024-10-12 22:26:03.777745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.777799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.777813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.777820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.777827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.777841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.422 [2024-10-12 22:26:03.787720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.787772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.787786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.787794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.787800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.787815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.422 [2024-10-12 22:26:03.797697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.422 [2024-10-12 22:26:03.797741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.422 [2024-10-12 22:26:03.797755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.422 [2024-10-12 22:26:03.797762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.422 [2024-10-12 22:26:03.797769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.422 [2024-10-12 22:26:03.797783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.422 qpair failed and we were unable to recover it.
00:37:45.423 [2024-10-12 22:26:03.807778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.423 [2024-10-12 22:26:03.807829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.423 [2024-10-12 22:26:03.807852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.423 [2024-10-12 22:26:03.807861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.423 [2024-10-12 22:26:03.807868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.423 [2024-10-12 22:26:03.807887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.423 qpair failed and we were unable to recover it.
00:37:45.423 [2024-10-12 22:26:03.817800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.423 [2024-10-12 22:26:03.817850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.423 [2024-10-12 22:26:03.817867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.423 [2024-10-12 22:26:03.817875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.423 [2024-10-12 22:26:03.817881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.423 [2024-10-12 22:26:03.817896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.423 qpair failed and we were unable to recover it.
00:37:45.423 [2024-10-12 22:26:03.827811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.423 [2024-10-12 22:26:03.827872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.423 [2024-10-12 22:26:03.827897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.423 [2024-10-12 22:26:03.827906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.423 [2024-10-12 22:26:03.827914] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.423 [2024-10-12 22:26:03.827934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.423 qpair failed and we were unable to recover it.
00:37:45.423 [2024-10-12 22:26:03.837838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.423 [2024-10-12 22:26:03.837890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.423 [2024-10-12 22:26:03.837915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.423 [2024-10-12 22:26:03.837924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.423 [2024-10-12 22:26:03.837932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.423 [2024-10-12 22:26:03.837951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.423 qpair failed and we were unable to recover it.
00:37:45.423 [2024-10-12 22:26:03.847803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.423 [2024-10-12 22:26:03.847893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.423 [2024-10-12 22:26:03.847909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.423 [2024-10-12 22:26:03.847917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.423 [2024-10-12 22:26:03.847924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.423 [2024-10-12 22:26:03.847939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.423 qpair failed and we were unable to recover it.
00:37:45.423 [2024-10-12 22:26:03.857898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.423 [2024-10-12 22:26:03.857971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.423 [2024-10-12 22:26:03.857986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.423 [2024-10-12 22:26:03.857993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.423 [2024-10-12 22:26:03.858005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.423 [2024-10-12 22:26:03.858020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.423 qpair failed and we were unable to recover it.
00:37:45.423 [2024-10-12 22:26:03.867913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.423 [2024-10-12 22:26:03.867970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.423 [2024-10-12 22:26:03.867984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.423 [2024-10-12 22:26:03.867991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.423 [2024-10-12 22:26:03.867997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.423 [2024-10-12 22:26:03.868011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.423 qpair failed and we were unable to recover it.
00:37:45.423 [2024-10-12 22:26:03.877952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.423 [2024-10-12 22:26:03.878001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.423 [2024-10-12 22:26:03.878014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.423 [2024-10-12 22:26:03.878022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.423 [2024-10-12 22:26:03.878028] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.423 [2024-10-12 22:26:03.878042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.423 qpair failed and we were unable to recover it.
00:37:45.423 [2024-10-12 22:26:03.887959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.423 [2024-10-12 22:26:03.888006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.423 [2024-10-12 22:26:03.888019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.423 [2024-10-12 22:26:03.888026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.423 [2024-10-12 22:26:03.888033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.423 [2024-10-12 22:26:03.888047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.423 qpair failed and we were unable to recover it.
00:37:45.423 [2024-10-12 22:26:03.898022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.423 [2024-10-12 22:26:03.898069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.423 [2024-10-12 22:26:03.898082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.423 [2024-10-12 22:26:03.898089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.423 [2024-10-12 22:26:03.898096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.423 [2024-10-12 22:26:03.898114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.423 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:03.908038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:03.908101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:03.908118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:03.908125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:03.908132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:03.908146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:03.918055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:03.918099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:03.918116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:03.918123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:03.918129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:03.918144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:03.928081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:03.928136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:03.928150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:03.928157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:03.928164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:03.928179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:03.938005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:03.938054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:03.938069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:03.938077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:03.938084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:03.938098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:03.948140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:03.948184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:03.948198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:03.948205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:03.948216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:03.948231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:03.958197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:03.958239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:03.958252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:03.958260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:03.958266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:03.958280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:03.968168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:03.968216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:03.968230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:03.968237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:03.968243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:03.968258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:03.978237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:03.978292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:03.978306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:03.978313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:03.978320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:03.978334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:03.988252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:03.988296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:03.988309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:03.988316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:03.988323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:03.988337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:03.998272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:03.998316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:03.998330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:03.998338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:03.998344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:03.998358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:04.008370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:04.008425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:04.008439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:04.008446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:04.008452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:04.008466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:04.018386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:04.018434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:04.018447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:04.018454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:04.018460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:04.018474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:04.028397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:04.028442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:04.028456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:04.028463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:04.028470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:04.028484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:04.038297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:04.038342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:04.038355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:04.038367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:04.038374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:04.038388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:04.048425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:45.685 [2024-10-12 22:26:04.048474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:45.685 [2024-10-12 22:26:04.048487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:45.685 [2024-10-12 22:26:04.048494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:45.685 [2024-10-12 22:26:04.048501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:45.685 [2024-10-12 22:26:04.048515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:45.685 qpair failed and we were unable to recover it.
00:37:45.685 [2024-10-12 22:26:04.058453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.685 [2024-10-12 22:26:04.058501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.058514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.058522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.058528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.058542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.068461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.068511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.068524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.068531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.068538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.068552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.078511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.078556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.078569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.078576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.078583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.078596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.088526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.088570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.088584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.088591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.088598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.088612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.098559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.098622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.098636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.098643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.098649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.098664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.108445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.108490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.108503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.108510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.108516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.108531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.118616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.118671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.118684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.118692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.118698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.118712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.128620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.128697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.128715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.128724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.128730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.128745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.138563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.138613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.138627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.138635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.138642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.138656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.148698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.148742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.148756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.148763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.148770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.148785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.158743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.158789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.158803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.158810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.158816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.158830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.686 [2024-10-12 22:26:04.168762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.686 [2024-10-12 22:26:04.168807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.686 [2024-10-12 22:26:04.168820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.686 [2024-10-12 22:26:04.168827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.686 [2024-10-12 22:26:04.168834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.686 [2024-10-12 22:26:04.168848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.686 qpair failed and we were unable to recover it. 
00:37:45.948 [2024-10-12 22:26:04.178807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.948 [2024-10-12 22:26:04.178858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.948 [2024-10-12 22:26:04.178872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.948 [2024-10-12 22:26:04.178879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.948 [2024-10-12 22:26:04.178886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.948 [2024-10-12 22:26:04.178900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.948 qpair failed and we were unable to recover it. 
00:37:45.948 [2024-10-12 22:26:04.188798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.948 [2024-10-12 22:26:04.188849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.948 [2024-10-12 22:26:04.188862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.948 [2024-10-12 22:26:04.188869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.948 [2024-10-12 22:26:04.188875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.948 [2024-10-12 22:26:04.188889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.948 qpair failed and we were unable to recover it. 
00:37:45.948 [2024-10-12 22:26:04.198847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.948 [2024-10-12 22:26:04.198892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.948 [2024-10-12 22:26:04.198906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.948 [2024-10-12 22:26:04.198913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.948 [2024-10-12 22:26:04.198920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.948 [2024-10-12 22:26:04.198934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.948 qpair failed and we were unable to recover it. 
00:37:45.948 [2024-10-12 22:26:04.208860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.948 [2024-10-12 22:26:04.208905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.948 [2024-10-12 22:26:04.208918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.948 [2024-10-12 22:26:04.208925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.948 [2024-10-12 22:26:04.208931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.948 [2024-10-12 22:26:04.208946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.948 qpair failed and we were unable to recover it. 
00:37:45.948 [2024-10-12 22:26:04.218900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.218950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.218967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.218974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.218980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.218995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.228912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.228959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.228974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.228981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.228988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.229002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.238935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.238986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.239000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.239007] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.239014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.239028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.248954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.249006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.249019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.249026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.249033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.249047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.258915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.258963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.258977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.258984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.258990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.259008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.269021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.269062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.269076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.269083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.269090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.269107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.279037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.279079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.279093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.279101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.279112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.279126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.288944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.288991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.289005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.289013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.289020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.289034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.299072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.299127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.299141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.299148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.299155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.299169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.309127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.309178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.309195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.309203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.309210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.309224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.319157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.319200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.319214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.319221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.319228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.319242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.329197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.329244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.329257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.329264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.329271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.329285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.339219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.949 [2024-10-12 22:26:04.339284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.949 [2024-10-12 22:26:04.339299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.949 [2024-10-12 22:26:04.339306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.949 [2024-10-12 22:26:04.339314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.949 [2024-10-12 22:26:04.339330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.949 qpair failed and we were unable to recover it. 
00:37:45.949 [2024-10-12 22:26:04.349144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.950 [2024-10-12 22:26:04.349189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.950 [2024-10-12 22:26:04.349204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.950 [2024-10-12 22:26:04.349211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.950 [2024-10-12 22:26:04.349221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.950 [2024-10-12 22:26:04.349235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.950 qpair failed and we were unable to recover it. 
00:37:45.950 [2024-10-12 22:26:04.359244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.950 [2024-10-12 22:26:04.359291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.950 [2024-10-12 22:26:04.359305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.950 [2024-10-12 22:26:04.359313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.950 [2024-10-12 22:26:04.359320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.950 [2024-10-12 22:26:04.359334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.950 qpair failed and we were unable to recover it. 
00:37:45.950 [2024-10-12 22:26:04.369287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.950 [2024-10-12 22:26:04.369335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.950 [2024-10-12 22:26:04.369348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.950 [2024-10-12 22:26:04.369356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.950 [2024-10-12 22:26:04.369362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.950 [2024-10-12 22:26:04.369377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.950 qpair failed and we were unable to recover it. 
00:37:45.950 [2024-10-12 22:26:04.379395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.950 [2024-10-12 22:26:04.379442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.950 [2024-10-12 22:26:04.379456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.950 [2024-10-12 22:26:04.379463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.950 [2024-10-12 22:26:04.379470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.950 [2024-10-12 22:26:04.379484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.950 qpair failed and we were unable to recover it. 
00:37:45.950 [2024-10-12 22:26:04.389354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.950 [2024-10-12 22:26:04.389419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.950 [2024-10-12 22:26:04.389432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.950 [2024-10-12 22:26:04.389439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.950 [2024-10-12 22:26:04.389445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.950 [2024-10-12 22:26:04.389460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.950 qpair failed and we were unable to recover it. 
00:37:45.950 [2024-10-12 22:26:04.399377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.950 [2024-10-12 22:26:04.399425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.950 [2024-10-12 22:26:04.399438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.950 [2024-10-12 22:26:04.399445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.950 [2024-10-12 22:26:04.399452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.950 [2024-10-12 22:26:04.399466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.950 qpair failed and we were unable to recover it. 
00:37:45.950 [2024-10-12 22:26:04.409413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.950 [2024-10-12 22:26:04.409462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.950 [2024-10-12 22:26:04.409475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.950 [2024-10-12 22:26:04.409483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.950 [2024-10-12 22:26:04.409489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.950 [2024-10-12 22:26:04.409503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.950 qpair failed and we were unable to recover it. 
00:37:45.950 [2024-10-12 22:26:04.419444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.950 [2024-10-12 22:26:04.419502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.950 [2024-10-12 22:26:04.419515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.950 [2024-10-12 22:26:04.419522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.950 [2024-10-12 22:26:04.419529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.950 [2024-10-12 22:26:04.419543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.950 qpair failed and we were unable to recover it. 
00:37:45.950 [2024-10-12 22:26:04.429483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:45.950 [2024-10-12 22:26:04.429530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:45.950 [2024-10-12 22:26:04.429543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:45.950 [2024-10-12 22:26:04.429550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:45.950 [2024-10-12 22:26:04.429557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:45.950 [2024-10-12 22:26:04.429571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:45.950 qpair failed and we were unable to recover it. 
00:37:46.212 [2024-10-12 22:26:04.439500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.212 [2024-10-12 22:26:04.439596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.212 [2024-10-12 22:26:04.439610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.212 [2024-10-12 22:26:04.439617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.212 [2024-10-12 22:26:04.439628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.212 [2024-10-12 22:26:04.439643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.212 qpair failed and we were unable to recover it. 
00:37:46.212 [2024-10-12 22:26:04.449549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.212 [2024-10-12 22:26:04.449604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.212 [2024-10-12 22:26:04.449617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.212 [2024-10-12 22:26:04.449625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.212 [2024-10-12 22:26:04.449631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.212 [2024-10-12 22:26:04.449645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.212 qpair failed and we were unable to recover it. 
00:37:46.212 [2024-10-12 22:26:04.459604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.459691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.459704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.459712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.459719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.459733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.469552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.469595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.469609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.469616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.469623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.469637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.479595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.479638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.479652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.479659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.479666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.479680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.489635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.489681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.489698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.489705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.489712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.489730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.499651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.499697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.499712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.499719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.499726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.499740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.509686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.509729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.509742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.509749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.509756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.509770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.519722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.519766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.519780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.519788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.519795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.519809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.529757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.529803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.529816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.529831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.529838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.529852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.539692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.539739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.539752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.539759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.539766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.539780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.549793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.549837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.549850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.549857] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.549864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.549878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.559812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.559861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.559886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.559895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.559902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.559922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.569849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.569914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.569939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.569948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.569956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.569975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.579884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.213 [2024-10-12 22:26:04.579937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.213 [2024-10-12 22:26:04.579952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.213 [2024-10-12 22:26:04.579960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.213 [2024-10-12 22:26:04.579967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.213 [2024-10-12 22:26:04.579983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.213 qpair failed and we were unable to recover it. 
00:37:46.213 [2024-10-12 22:26:04.589893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.214 [2024-10-12 22:26:04.589942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.214 [2024-10-12 22:26:04.589956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.214 [2024-10-12 22:26:04.589963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.214 [2024-10-12 22:26:04.589970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.214 [2024-10-12 22:26:04.589984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.214 qpair failed and we were unable to recover it. 
00:37:46.214 [2024-10-12 22:26:04.599920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.214 [2024-10-12 22:26:04.599982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.214 [2024-10-12 22:26:04.599996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.214 [2024-10-12 22:26:04.600003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.214 [2024-10-12 22:26:04.600010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.214 [2024-10-12 22:26:04.600025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.214 qpair failed and we were unable to recover it. 
00:37:46.214 [2024-10-12 22:26:04.609966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.214 [2024-10-12 22:26:04.610019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.214 [2024-10-12 22:26:04.610032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.214 [2024-10-12 22:26:04.610039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.214 [2024-10-12 22:26:04.610046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.214 [2024-10-12 22:26:04.610060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.214 qpair failed and we were unable to recover it. 
00:37:46.214 [2024-10-12 22:26:04.619869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.214 [2024-10-12 22:26:04.619915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.214 [2024-10-12 22:26:04.619929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.214 [2024-10-12 22:26:04.619940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.214 [2024-10-12 22:26:04.619947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.214 [2024-10-12 22:26:04.619961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.214 qpair failed and we were unable to recover it. 
00:37:46.214 [2024-10-12 22:26:04.629987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.214 [2024-10-12 22:26:04.630028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.214 [2024-10-12 22:26:04.630042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.214 [2024-10-12 22:26:04.630049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.214 [2024-10-12 22:26:04.630056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.214 [2024-10-12 22:26:04.630071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.214 qpair failed and we were unable to recover it. 
00:37:46.214 [2024-10-12 22:26:04.640038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.214 [2024-10-12 22:26:04.640126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.214 [2024-10-12 22:26:04.640141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.214 [2024-10-12 22:26:04.640148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.214 [2024-10-12 22:26:04.640156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.214 [2024-10-12 22:26:04.640170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.214 qpair failed and we were unable to recover it. 
00:37:46.214 [2024-10-12 22:26:04.650072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.214 [2024-10-12 22:26:04.650122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.214 [2024-10-12 22:26:04.650136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.214 [2024-10-12 22:26:04.650143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.214 [2024-10-12 22:26:04.650150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.214 [2024-10-12 22:26:04.650164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.214 qpair failed and we were unable to recover it. 
00:37:46.214 [2024-10-12 22:26:04.660110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.214 [2024-10-12 22:26:04.660198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.214 [2024-10-12 22:26:04.660211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.214 [2024-10-12 22:26:04.660218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.214 [2024-10-12 22:26:04.660226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.214 [2024-10-12 22:26:04.660240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.214 qpair failed and we were unable to recover it.
00:37:46.214 [2024-10-12 22:26:04.670120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.214 [2024-10-12 22:26:04.670163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.214 [2024-10-12 22:26:04.670176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.214 [2024-10-12 22:26:04.670184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.214 [2024-10-12 22:26:04.670190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.214 [2024-10-12 22:26:04.670204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.214 qpair failed and we were unable to recover it.
00:37:46.214 [2024-10-12 22:26:04.680156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.214 [2024-10-12 22:26:04.680198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.214 [2024-10-12 22:26:04.680211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.214 [2024-10-12 22:26:04.680218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.214 [2024-10-12 22:26:04.680225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.214 [2024-10-12 22:26:04.680239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.214 qpair failed and we were unable to recover it.
00:37:46.214 [2024-10-12 22:26:04.690183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.214 [2024-10-12 22:26:04.690279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.214 [2024-10-12 22:26:04.690293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.214 [2024-10-12 22:26:04.690300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.214 [2024-10-12 22:26:04.690307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.214 [2024-10-12 22:26:04.690321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.214 qpair failed and we were unable to recover it.
00:37:46.477 [2024-10-12 22:26:04.700200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.477 [2024-10-12 22:26:04.700258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.477 [2024-10-12 22:26:04.700271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.477 [2024-10-12 22:26:04.700278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.477 [2024-10-12 22:26:04.700285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.477 [2024-10-12 22:26:04.700299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.477 qpair failed and we were unable to recover it.
00:37:46.477 [2024-10-12 22:26:04.710235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.477 [2024-10-12 22:26:04.710278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.477 [2024-10-12 22:26:04.710294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.477 [2024-10-12 22:26:04.710301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.477 [2024-10-12 22:26:04.710308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.477 [2024-10-12 22:26:04.710322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.477 qpair failed and we were unable to recover it.
00:37:46.477 [2024-10-12 22:26:04.720257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.477 [2024-10-12 22:26:04.720301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.477 [2024-10-12 22:26:04.720314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.477 [2024-10-12 22:26:04.720321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.477 [2024-10-12 22:26:04.720327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.477 [2024-10-12 22:26:04.720342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.477 qpair failed and we were unable to recover it.
00:37:46.477 [2024-10-12 22:26:04.730290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.477 [2024-10-12 22:26:04.730337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.477 [2024-10-12 22:26:04.730350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.477 [2024-10-12 22:26:04.730358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.477 [2024-10-12 22:26:04.730364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.477 [2024-10-12 22:26:04.730378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.477 qpair failed and we were unable to recover it.
00:37:46.477 [2024-10-12 22:26:04.740350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.477 [2024-10-12 22:26:04.740406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.477 [2024-10-12 22:26:04.740419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.477 [2024-10-12 22:26:04.740427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.477 [2024-10-12 22:26:04.740433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.477 [2024-10-12 22:26:04.740447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.477 qpair failed and we were unable to recover it.
00:37:46.477 [2024-10-12 22:26:04.750314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.477 [2024-10-12 22:26:04.750359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.477 [2024-10-12 22:26:04.750372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.477 [2024-10-12 22:26:04.750379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.477 [2024-10-12 22:26:04.750386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.477 [2024-10-12 22:26:04.750404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.477 qpair failed and we were unable to recover it.
00:37:46.477 [2024-10-12 22:26:04.760348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.477 [2024-10-12 22:26:04.760390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.477 [2024-10-12 22:26:04.760403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.477 [2024-10-12 22:26:04.760411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.477 [2024-10-12 22:26:04.760418] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.477 [2024-10-12 22:26:04.760431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.477 qpair failed and we were unable to recover it.
00:37:46.477 [2024-10-12 22:26:04.770408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.477 [2024-10-12 22:26:04.770456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.477 [2024-10-12 22:26:04.770470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.477 [2024-10-12 22:26:04.770477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.477 [2024-10-12 22:26:04.770484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.477 [2024-10-12 22:26:04.770497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.477 qpair failed and we were unable to recover it.
00:37:46.477 [2024-10-12 22:26:04.780312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.477 [2024-10-12 22:26:04.780361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.477 [2024-10-12 22:26:04.780374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.477 [2024-10-12 22:26:04.780382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.477 [2024-10-12 22:26:04.780389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.477 [2024-10-12 22:26:04.780403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.477 qpair failed and we were unable to recover it.
00:37:46.477 [2024-10-12 22:26:04.790407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.477 [2024-10-12 22:26:04.790452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.477 [2024-10-12 22:26:04.790466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.477 [2024-10-12 22:26:04.790473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.477 [2024-10-12 22:26:04.790480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.790495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.800478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.800521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.800537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.800545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.800551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.800565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.810379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.810429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.810443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.810450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.810457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.810476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.820553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.820601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.820615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.820622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.820629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.820643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.830551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.830596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.830609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.830617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.830623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.830637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.840556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.840602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.840615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.840622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.840629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.840647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.850613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.850664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.850678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.850685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.850692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.850706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.860647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.860714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.860727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.860734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.860741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.860755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.870658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.870717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.870731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.870738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.870745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.870759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.880595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.880684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.880699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.880706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.880714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.880728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.890597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.890648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.890662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.890669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.890676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.890690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.900760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.900811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.900824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.900831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.900838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.900852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.910744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.910788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.910801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.910809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.910815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.910829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.920800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.920844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.920858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.920865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.920872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.478 [2024-10-12 22:26:04.920886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.478 qpair failed and we were unable to recover it.
00:37:46.478 [2024-10-12 22:26:04.930813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.478 [2024-10-12 22:26:04.930863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.478 [2024-10-12 22:26:04.930876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.478 [2024-10-12 22:26:04.930883] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.478 [2024-10-12 22:26:04.930894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.479 [2024-10-12 22:26:04.930908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.479 qpair failed and we were unable to recover it.
00:37:46.479 [2024-10-12 22:26:04.940742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.479 [2024-10-12 22:26:04.940791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.479 [2024-10-12 22:26:04.940805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.479 [2024-10-12 22:26:04.940812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.479 [2024-10-12 22:26:04.940818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.479 [2024-10-12 22:26:04.940832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.479 qpair failed and we were unable to recover it.
00:37:46.479 [2024-10-12 22:26:04.950873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.479 [2024-10-12 22:26:04.950917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.479 [2024-10-12 22:26:04.950931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.479 [2024-10-12 22:26:04.950939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.479 [2024-10-12 22:26:04.950945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.479 [2024-10-12 22:26:04.950959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.479 qpair failed and we were unable to recover it.
00:37:46.479 [2024-10-12 22:26:04.960912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.479 [2024-10-12 22:26:04.960956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.479 [2024-10-12 22:26:04.960969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.479 [2024-10-12 22:26:04.960977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.479 [2024-10-12 22:26:04.960983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.479 [2024-10-12 22:26:04.960997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.479 qpair failed and we were unable to recover it.
00:37:46.741 [2024-10-12 22:26:04.970950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.742 [2024-10-12 22:26:04.970998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.742 [2024-10-12 22:26:04.971012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.742 [2024-10-12 22:26:04.971019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.742 [2024-10-12 22:26:04.971026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.742 [2024-10-12 22:26:04.971040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.742 qpair failed and we were unable to recover it.
00:37:46.742 [2024-10-12 22:26:04.980867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.742 [2024-10-12 22:26:04.980932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.742 [2024-10-12 22:26:04.980947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.742 [2024-10-12 22:26:04.980954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.742 [2024-10-12 22:26:04.980961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.742 [2024-10-12 22:26:04.980976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.742 qpair failed and we were unable to recover it.
00:37:46.742 [2024-10-12 22:26:04.990980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.742 [2024-10-12 22:26:04.991021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.742 [2024-10-12 22:26:04.991035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.742 [2024-10-12 22:26:04.991042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.742 [2024-10-12 22:26:04.991049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.742 [2024-10-12 22:26:04.991063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.742 qpair failed and we were unable to recover it.
00:37:46.742 [2024-10-12 22:26:05.001022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.742 [2024-10-12 22:26:05.001065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.742 [2024-10-12 22:26:05.001078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.742 [2024-10-12 22:26:05.001086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.742 [2024-10-12 22:26:05.001093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.742 [2024-10-12 22:26:05.001111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.742 qpair failed and we were unable to recover it.
00:37:46.742 [2024-10-12 22:26:05.011043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:46.742 [2024-10-12 22:26:05.011091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:46.742 [2024-10-12 22:26:05.011108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:46.742 [2024-10-12 22:26:05.011116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:46.742 [2024-10-12 22:26:05.011123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:46.742 [2024-10-12 22:26:05.011137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:46.742 qpair failed and we were unable to recover it.
00:37:46.742 [2024-10-12 22:26:05.021076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.742 [2024-10-12 22:26:05.021128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.742 [2024-10-12 22:26:05.021141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.742 [2024-10-12 22:26:05.021152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.742 [2024-10-12 22:26:05.021159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.742 [2024-10-12 22:26:05.021173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.742 qpair failed and we were unable to recover it. 
00:37:46.742 [2024-10-12 22:26:05.031083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.742 [2024-10-12 22:26:05.031139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.742 [2024-10-12 22:26:05.031152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.742 [2024-10-12 22:26:05.031160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.742 [2024-10-12 22:26:05.031166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.742 [2024-10-12 22:26:05.031181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.742 qpair failed and we were unable to recover it. 
00:37:46.742 [2024-10-12 22:26:05.041115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.742 [2024-10-12 22:26:05.041201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.742 [2024-10-12 22:26:05.041215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.742 [2024-10-12 22:26:05.041223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.742 [2024-10-12 22:26:05.041230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.742 [2024-10-12 22:26:05.041244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.742 qpair failed and we were unable to recover it. 
00:37:46.742 [2024-10-12 22:26:05.051146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.742 [2024-10-12 22:26:05.051241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.742 [2024-10-12 22:26:05.051254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.742 [2024-10-12 22:26:05.051262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.742 [2024-10-12 22:26:05.051268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.742 [2024-10-12 22:26:05.051283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.742 qpair failed and we were unable to recover it. 
00:37:46.742 [2024-10-12 22:26:05.061183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.742 [2024-10-12 22:26:05.061278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.742 [2024-10-12 22:26:05.061292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.742 [2024-10-12 22:26:05.061299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.742 [2024-10-12 22:26:05.061306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.742 [2024-10-12 22:26:05.061321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.742 qpair failed and we were unable to recover it. 
00:37:46.742 [2024-10-12 22:26:05.071157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.742 [2024-10-12 22:26:05.071202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.742 [2024-10-12 22:26:05.071215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.742 [2024-10-12 22:26:05.071223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.742 [2024-10-12 22:26:05.071230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.742 [2024-10-12 22:26:05.071244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.742 qpair failed and we were unable to recover it. 
00:37:46.742 [2024-10-12 22:26:05.081161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.742 [2024-10-12 22:26:05.081211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.742 [2024-10-12 22:26:05.081225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.742 [2024-10-12 22:26:05.081232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.742 [2024-10-12 22:26:05.081239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.742 [2024-10-12 22:26:05.081253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.742 qpair failed and we were unable to recover it. 
00:37:46.742 [2024-10-12 22:26:05.091247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.742 [2024-10-12 22:26:05.091293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.742 [2024-10-12 22:26:05.091307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.742 [2024-10-12 22:26:05.091315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.742 [2024-10-12 22:26:05.091322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.742 [2024-10-12 22:26:05.091337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.742 qpair failed and we were unable to recover it. 
00:37:46.742 [2024-10-12 22:26:05.101278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.742 [2024-10-12 22:26:05.101323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.101337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.101345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.101351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.101365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.111304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.111348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.111362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.111372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.111380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.111393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.121346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.121396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.121410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.121417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.121424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.121438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.131380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.131475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.131489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.131497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.131503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.131518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.141421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.141471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.141485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.141492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.141498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.141512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.151419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.151465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.151478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.151485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.151491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.151505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.161428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.161488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.161502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.161509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.161516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.161530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.171469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.171517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.171530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.171537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.171544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.171558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.181517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.181564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.181578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.181585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.181592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.181606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.191472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.191521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.191534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.191542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.191548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.191562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.201551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.201600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.201617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.201624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.201630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.201644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.211583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.211672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.211685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.211693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.211699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.211713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:46.743 [2024-10-12 22:26:05.221608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:46.743 [2024-10-12 22:26:05.221674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:46.743 [2024-10-12 22:26:05.221687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:46.743 [2024-10-12 22:26:05.221695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:46.743 [2024-10-12 22:26:05.221701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:46.743 [2024-10-12 22:26:05.221715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:46.743 qpair failed and we were unable to recover it. 
00:37:47.006 [2024-10-12 22:26:05.231609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.006 [2024-10-12 22:26:05.231655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.006 [2024-10-12 22:26:05.231669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.006 [2024-10-12 22:26:05.231676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.006 [2024-10-12 22:26:05.231684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.006 [2024-10-12 22:26:05.231698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.006 qpair failed and we were unable to recover it. 
00:37:47.006 [2024-10-12 22:26:05.241628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.006 [2024-10-12 22:26:05.241670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.006 [2024-10-12 22:26:05.241684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.006 [2024-10-12 22:26:05.241691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.006 [2024-10-12 22:26:05.241698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.006 [2024-10-12 22:26:05.241716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.006 qpair failed and we were unable to recover it. 
00:37:47.006 [2024-10-12 22:26:05.251569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.006 [2024-10-12 22:26:05.251615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.006 [2024-10-12 22:26:05.251629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.006 [2024-10-12 22:26:05.251636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.006 [2024-10-12 22:26:05.251643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.006 [2024-10-12 22:26:05.251663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.006 qpair failed and we were unable to recover it. 
00:37:47.006 [2024-10-12 22:26:05.261720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.006 [2024-10-12 22:26:05.261769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.006 [2024-10-12 22:26:05.261782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.006 [2024-10-12 22:26:05.261789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.006 [2024-10-12 22:26:05.261796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.006 [2024-10-12 22:26:05.261810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.006 qpair failed and we were unable to recover it. 
00:37:47.006 [2024-10-12 22:26:05.271741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.006 [2024-10-12 22:26:05.271783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.006 [2024-10-12 22:26:05.271797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.006 [2024-10-12 22:26:05.271804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.006 [2024-10-12 22:26:05.271810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.006 [2024-10-12 22:26:05.271824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.006 qpair failed and we were unable to recover it. 
00:37:47.006 [2024-10-12 22:26:05.281768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.006 [2024-10-12 22:26:05.281813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.006 [2024-10-12 22:26:05.281826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.006 [2024-10-12 22:26:05.281834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.006 [2024-10-12 22:26:05.281840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.006 [2024-10-12 22:26:05.281854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.006 qpair failed and we were unable to recover it. 
00:37:47.006 [2024-10-12 22:26:05.291823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.006 [2024-10-12 22:26:05.291885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.006 [2024-10-12 22:26:05.291905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.006 [2024-10-12 22:26:05.291912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.006 [2024-10-12 22:26:05.291919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.006 [2024-10-12 22:26:05.291933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.006 qpair failed and we were unable to recover it. 
00:37:47.006 [2024-10-12 22:26:05.301836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.006 [2024-10-12 22:26:05.301895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.006 [2024-10-12 22:26:05.301908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.006 [2024-10-12 22:26:05.301916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.006 [2024-10-12 22:26:05.301922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.006 [2024-10-12 22:26:05.301936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.006 qpair failed and we were unable to recover it. 
00:37:47.006 [2024-10-12 22:26:05.311848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.006 [2024-10-12 22:26:05.311900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.006 [2024-10-12 22:26:05.311913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.006 [2024-10-12 22:26:05.311921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.006 [2024-10-12 22:26:05.311928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.006 [2024-10-12 22:26:05.311942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.006 qpair failed and we were unable to recover it. 
00:37:47.006 [2024-10-12 22:26:05.321873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.006 [2024-10-12 22:26:05.321958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.006 [2024-10-12 22:26:05.321972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.006 [2024-10-12 22:26:05.321979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.006 [2024-10-12 22:26:05.321986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.006 [2024-10-12 22:26:05.322000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.006 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.331920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.331971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.331984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.331992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.331998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.332016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.341943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.341989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.342003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.342010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.342017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.342031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.351963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.352006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.352019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.352027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.352034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.352048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.361984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.362028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.362042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.362049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.362056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.362070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.372019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.372066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.372079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.372087] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.372093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.372112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.381937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.381992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.382011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.382020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.382028] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.382044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.392110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.392181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.392196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.392203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.392210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.392226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.402105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.402150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.402164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.402171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.402178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.402192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.412004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.412050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.412064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.412072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.412078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.412093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.422152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.422200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.422214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.422221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.422231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.422246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.432195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.432240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.432254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.432261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.432268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.432282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.442193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.442238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.442252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.442259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.442266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.442279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.452248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.452314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.452328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.452335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.452341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.452356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.462254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.462304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.007 [2024-10-12 22:26:05.462318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.007 [2024-10-12 22:26:05.462325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.007 [2024-10-12 22:26:05.462332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.007 [2024-10-12 22:26:05.462346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.007 qpair failed and we were unable to recover it. 
00:37:47.007 [2024-10-12 22:26:05.472160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.007 [2024-10-12 22:26:05.472212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.008 [2024-10-12 22:26:05.472226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.008 [2024-10-12 22:26:05.472233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.008 [2024-10-12 22:26:05.472240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.008 [2024-10-12 22:26:05.472253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.008 qpair failed and we were unable to recover it. 
00:37:47.008 [2024-10-12 22:26:05.482284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.008 [2024-10-12 22:26:05.482331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.008 [2024-10-12 22:26:05.482344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.008 [2024-10-12 22:26:05.482351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.008 [2024-10-12 22:26:05.482358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.008 [2024-10-12 22:26:05.482372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.008 qpair failed and we were unable to recover it. 
00:37:47.269 [2024-10-12 22:26:05.492371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.269 [2024-10-12 22:26:05.492422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.269 [2024-10-12 22:26:05.492436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.269 [2024-10-12 22:26:05.492444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.269 [2024-10-12 22:26:05.492451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.492465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.502459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.502515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.502529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.502536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.502543] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.502557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.512407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.512461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.512475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.512482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.512493] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.512507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.522402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.522483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.522497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.522505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.522511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.522526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.532446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.532497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.532512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.532519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.532526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.532543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.542506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.542556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.542570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.542578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.542584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.542599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.552520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.552562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.552576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.552583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.552589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.552603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.562513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.562561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.562575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.562582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.562590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.562604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.572557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.572603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.572616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.572623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.572630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.572645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.582592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.582672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.582685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.582692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.582699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.582713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.592589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.592640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.592654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.592661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.592668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.592682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.602614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.602660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.602674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.602685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.602692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.602706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.612656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.612699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.612713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.270 [2024-10-12 22:26:05.612720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.270 [2024-10-12 22:26:05.612727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.270 [2024-10-12 22:26:05.612741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.270 qpair failed and we were unable to recover it. 
00:37:47.270 [2024-10-12 22:26:05.622673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.270 [2024-10-12 22:26:05.622725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.270 [2024-10-12 22:26:05.622738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.622745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.622752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.622765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.632701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.632744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.632758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.632765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.632771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.632785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.642724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.642770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.642784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.642791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.642798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.642812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.652731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.652775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.652789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.652796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.652803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.652817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.662776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.662827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.662852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.662861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.662868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.662888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.672825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.672873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.672888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.672896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.672903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.672918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.682852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.682949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.682974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.682984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.682991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.683012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.692880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.692927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.692947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.692955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.692962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.692978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.702899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.702952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.702968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.702976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.702983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.703001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.712920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.712966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.712981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.712988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.712996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.713011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.722939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.722982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.722996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.723003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.723010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.723024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.732985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.733039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.733052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.733059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.733066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.733080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.743029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.743076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.743090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.743097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.743108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.743123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.271 [2024-10-12 22:26:05.753023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.271 [2024-10-12 22:26:05.753068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.271 [2024-10-12 22:26:05.753081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.271 [2024-10-12 22:26:05.753089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.271 [2024-10-12 22:26:05.753096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.271 [2024-10-12 22:26:05.753115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.271 qpair failed and we were unable to recover it. 
00:37:47.533 [2024-10-12 22:26:05.763049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.533 [2024-10-12 22:26:05.763091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.533 [2024-10-12 22:26:05.763108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.533 [2024-10-12 22:26:05.763115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.533 [2024-10-12 22:26:05.763122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.533 [2024-10-12 22:26:05.763136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.533 qpair failed and we were unable to recover it. 
00:37:47.533 [2024-10-12 22:26:05.773088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.533 [2024-10-12 22:26:05.773137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.533 [2024-10-12 22:26:05.773151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.533 [2024-10-12 22:26:05.773158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.533 [2024-10-12 22:26:05.773164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.533 [2024-10-12 22:26:05.773178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.533 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.783119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.783165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.783183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.783190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.783196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.783211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.793022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.793083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.793096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.793107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.793114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.793128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.803163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.803210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.803223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.803230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.803237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.803252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.813185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.813239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.813253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.813261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.813268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.813282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.823207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.823257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.823271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.823278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.823285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.823302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.833232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.833301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.833314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.833322] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.833329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.833343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.843273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.843314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.843328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.843335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.843342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.843356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.853297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.853343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.853357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.853364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.853371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.853385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.863343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.863391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.863406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.863414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.863420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.863435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.873348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.873399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.873416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.873423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.873430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.873444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.883390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.883459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.883472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.883480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.883486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.883500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.893418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.893474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.893487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.893494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.893501] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.893515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.534 qpair failed and we were unable to recover it. 
00:37:47.534 [2024-10-12 22:26:05.903434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.534 [2024-10-12 22:26:05.903488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.534 [2024-10-12 22:26:05.903501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.534 [2024-10-12 22:26:05.903509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.534 [2024-10-12 22:26:05.903515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.534 [2024-10-12 22:26:05.903529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.535 qpair failed and we were unable to recover it. 
00:37:47.535 [2024-10-12 22:26:05.913449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.535 [2024-10-12 22:26:05.913491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.535 [2024-10-12 22:26:05.913504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.535 [2024-10-12 22:26:05.913511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.535 [2024-10-12 22:26:05.913521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.535 [2024-10-12 22:26:05.913535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.535 qpair failed and we were unable to recover it. 
00:37:47.535 [2024-10-12 22:26:05.923476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.535 [2024-10-12 22:26:05.923523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.535 [2024-10-12 22:26:05.923536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.535 [2024-10-12 22:26:05.923544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.535 [2024-10-12 22:26:05.923550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.535 [2024-10-12 22:26:05.923564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.535 qpair failed and we were unable to recover it. 
00:37:47.535 [2024-10-12 22:26:05.933403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.535 [2024-10-12 22:26:05.933451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.535 [2024-10-12 22:26:05.933465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.535 [2024-10-12 22:26:05.933472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.535 [2024-10-12 22:26:05.933478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.535 [2024-10-12 22:26:05.933492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.535 qpair failed and we were unable to recover it. 
00:37:47.535 [2024-10-12 22:26:05.943421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.535 [2024-10-12 22:26:05.943470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.535 [2024-10-12 22:26:05.943484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.535 [2024-10-12 22:26:05.943492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.535 [2024-10-12 22:26:05.943498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.535 [2024-10-12 22:26:05.943518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.535 qpair failed and we were unable to recover it. 
00:37:47.535 [2024-10-12 22:26:05.953570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:47.535 [2024-10-12 22:26:05.953610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:47.535 [2024-10-12 22:26:05.953624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:47.535 [2024-10-12 22:26:05.953631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:47.535 [2024-10-12 22:26:05.953637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:47.535 [2024-10-12 22:26:05.953652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:47.535 qpair failed and we were unable to recover it. 
00:37:47.535 [2024-10-12 22:26:05.963588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.535 [2024-10-12 22:26:05.963636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.535 [2024-10-12 22:26:05.963651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.535 [2024-10-12 22:26:05.963658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.535 [2024-10-12 22:26:05.963665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.535 [2024-10-12 22:26:05.963678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.535 qpair failed and we were unable to recover it.
00:37:47.535 [2024-10-12 22:26:05.973624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.535 [2024-10-12 22:26:05.973676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.535 [2024-10-12 22:26:05.973690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.535 [2024-10-12 22:26:05.973697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.535 [2024-10-12 22:26:05.973703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.535 [2024-10-12 22:26:05.973717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.535 qpair failed and we were unable to recover it.
00:37:47.535 [2024-10-12 22:26:05.983653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.535 [2024-10-12 22:26:05.983705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.535 [2024-10-12 22:26:05.983718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.535 [2024-10-12 22:26:05.983725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.535 [2024-10-12 22:26:05.983732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.535 [2024-10-12 22:26:05.983746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.535 qpair failed and we were unable to recover it.
00:37:47.535 [2024-10-12 22:26:05.993663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.535 [2024-10-12 22:26:05.993710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.535 [2024-10-12 22:26:05.993723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.535 [2024-10-12 22:26:05.993730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.535 [2024-10-12 22:26:05.993737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.535 [2024-10-12 22:26:05.993751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.535 qpair failed and we were unable to recover it.
00:37:47.535 [2024-10-12 22:26:06.003706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.535 [2024-10-12 22:26:06.003748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.535 [2024-10-12 22:26:06.003761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.535 [2024-10-12 22:26:06.003769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.535 [2024-10-12 22:26:06.003778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.535 [2024-10-12 22:26:06.003793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.535 qpair failed and we were unable to recover it.
00:37:47.535 [2024-10-12 22:26:06.013739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.535 [2024-10-12 22:26:06.013784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.535 [2024-10-12 22:26:06.013798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.535 [2024-10-12 22:26:06.013806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.535 [2024-10-12 22:26:06.013812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.535 [2024-10-12 22:26:06.013826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.535 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.023766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.023817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.023830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.023838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.023844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.023858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.033777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.033821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.033835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.033842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.033848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.033862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.043785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.043830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.043843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.043851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.043857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.043871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.053842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.053889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.053903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.053910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.053917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.053931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.063878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.063927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.063941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.063948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.063955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.063969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.073903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.073944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.073957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.073964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.073971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.073985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.083923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.084014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.084029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.084036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.084043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.084058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.093831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.093875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.093889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.093900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.093907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.093922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.103978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.104028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.104043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.104050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.104057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.104075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.114009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.114052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.114066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.114073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.114079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.114094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.124022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.124064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.124077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.124084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.124091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.124108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.134064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.134113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.134127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.134134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.134141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.134155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.144092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.144182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.144196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.144204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.144211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.144225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.154105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.154168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.154182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.154189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.154196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.154210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.164133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.164179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.164192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.164200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.164206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.164220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.174119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.174167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.174180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.174187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.174194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.174208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.184187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.184263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.184276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.184286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.184294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.184308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.194194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.194242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.194255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.194263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.194269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.194284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.204239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.204288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.204301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.204308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.204315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.204329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.214135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.214205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.214220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.214227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.214233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.214254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.224175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.224221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.224234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.224241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.224248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.224262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.234321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.234366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.234379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.234387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.234394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.234407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.244330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.244375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.244388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.244395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.244402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.244415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.254249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.254296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.254311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.254318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.254324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.254344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.264461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.264506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.264520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.264527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.264534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.264548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:47.797 [2024-10-12 22:26:06.274438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:47.797 [2024-10-12 22:26:06.274481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:47.797 [2024-10-12 22:26:06.274497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:47.797 [2024-10-12 22:26:06.274504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:47.797 [2024-10-12 22:26:06.274511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90
00:37:47.797 [2024-10-12 22:26:06.274525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:47.797 qpair failed and we were unable to recover it.
00:37:48.058 [2024-10-12 22:26:06.284410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.058 [2024-10-12 22:26:06.284495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.058 [2024-10-12 22:26:06.284509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.058 [2024-10-12 22:26:06.284517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.058 [2024-10-12 22:26:06.284524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.058 [2024-10-12 22:26:06.284538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.058 qpair failed and we were unable to recover it. 
00:37:48.058 [2024-10-12 22:26:06.294495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.058 [2024-10-12 22:26:06.294550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.058 [2024-10-12 22:26:06.294563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.058 [2024-10-12 22:26:06.294570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.058 [2024-10-12 22:26:06.294576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.058 [2024-10-12 22:26:06.294590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.058 qpair failed and we were unable to recover it. 
00:37:48.058 [2024-10-12 22:26:06.304508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.058 [2024-10-12 22:26:06.304556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.058 [2024-10-12 22:26:06.304570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.058 [2024-10-12 22:26:06.304577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.058 [2024-10-12 22:26:06.304583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.058 [2024-10-12 22:26:06.304597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.058 qpair failed and we were unable to recover it. 
00:37:48.058 [2024-10-12 22:26:06.314526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.058 [2024-10-12 22:26:06.314568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.058 [2024-10-12 22:26:06.314581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.058 [2024-10-12 22:26:06.314588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.058 [2024-10-12 22:26:06.314595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.058 [2024-10-12 22:26:06.314612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.058 qpair failed and we were unable to recover it. 
00:37:48.058 [2024-10-12 22:26:06.324569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.058 [2024-10-12 22:26:06.324617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.058 [2024-10-12 22:26:06.324630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.058 [2024-10-12 22:26:06.324637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.058 [2024-10-12 22:26:06.324644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.058 [2024-10-12 22:26:06.324658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.058 qpair failed and we were unable to recover it. 
00:37:48.058 [2024-10-12 22:26:06.334456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.058 [2024-10-12 22:26:06.334499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.058 [2024-10-12 22:26:06.334513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.058 [2024-10-12 22:26:06.334520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.058 [2024-10-12 22:26:06.334526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.058 [2024-10-12 22:26:06.334540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.058 qpair failed and we were unable to recover it. 
00:37:48.058 [2024-10-12 22:26:06.344633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.058 [2024-10-12 22:26:06.344679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.058 [2024-10-12 22:26:06.344692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.058 [2024-10-12 22:26:06.344699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.058 [2024-10-12 22:26:06.344706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.058 [2024-10-12 22:26:06.344720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.058 qpair failed and we were unable to recover it. 
00:37:48.058 [2024-10-12 22:26:06.354630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.058 [2024-10-12 22:26:06.354674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.058 [2024-10-12 22:26:06.354688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.058 [2024-10-12 22:26:06.354695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.058 [2024-10-12 22:26:06.354701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.058 [2024-10-12 22:26:06.354716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.058 qpair failed and we were unable to recover it. 
00:37:48.058 [2024-10-12 22:26:06.364657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.058 [2024-10-12 22:26:06.364701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.058 [2024-10-12 22:26:06.364718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.058 [2024-10-12 22:26:06.364725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.364732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.364746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.374694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.374780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.374794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.374801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.374808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.374822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.384724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.384769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.384783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.384790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.384797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.384811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.394733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.394779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.394793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.394800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.394807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.394821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.404772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.404818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.404832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.404840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.404850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.404867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.414807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.414862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.414887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.414896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.414904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.414924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.424833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.424891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.424917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.424926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.424933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.424953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.434822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.434872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.434888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.434896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.434902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.434918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.444870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.444915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.444929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.444936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.444943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.444957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.454915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.454964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.454978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.454986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.454993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.455007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.464935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.464984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.464997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.465005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.465011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.465026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.474950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.475022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.475035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.475042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.475049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.475064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.484970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.485017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.485031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.485039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.485045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.485059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.495018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.495066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.495079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.495086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.495098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.495115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.505063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.505118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.505132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.505139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.505146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.505160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.515074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.515121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.515136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.515143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.515149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.515164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.525094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.525143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.525156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.525164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.525170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.525185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.059 [2024-10-12 22:26:06.535000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.059 [2024-10-12 22:26:06.535047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.059 [2024-10-12 22:26:06.535061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.059 [2024-10-12 22:26:06.535068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.059 [2024-10-12 22:26:06.535075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.059 [2024-10-12 22:26:06.535090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.059 qpair failed and we were unable to recover it. 
00:37:48.321 [2024-10-12 22:26:06.545162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.321 [2024-10-12 22:26:06.545210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.321 [2024-10-12 22:26:06.545225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.321 [2024-10-12 22:26:06.545232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.321 [2024-10-12 22:26:06.545239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.321 [2024-10-12 22:26:06.545253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.321 qpair failed and we were unable to recover it. 
00:37:48.321 [2024-10-12 22:26:06.555161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.321 [2024-10-12 22:26:06.555209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.321 [2024-10-12 22:26:06.555223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.321 [2024-10-12 22:26:06.555230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.321 [2024-10-12 22:26:06.555236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.321 [2024-10-12 22:26:06.555251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.321 qpair failed and we were unable to recover it. 
00:37:48.321 [2024-10-12 22:26:06.565200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.321 [2024-10-12 22:26:06.565241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.321 [2024-10-12 22:26:06.565255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.321 [2024-10-12 22:26:06.565262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.321 [2024-10-12 22:26:06.565269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.321 [2024-10-12 22:26:06.565284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.321 qpair failed and we were unable to recover it. 
00:37:48.321 [2024-10-12 22:26:06.575246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.321 [2024-10-12 22:26:06.575292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.322 [2024-10-12 22:26:06.575305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.322 [2024-10-12 22:26:06.575312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.322 [2024-10-12 22:26:06.575319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.322 [2024-10-12 22:26:06.575333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.322 qpair failed and we were unable to recover it. 
00:37:48.322 [2024-10-12 22:26:06.585279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.322 [2024-10-12 22:26:06.585331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.322 [2024-10-12 22:26:06.585345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.322 [2024-10-12 22:26:06.585356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.322 [2024-10-12 22:26:06.585362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.322 [2024-10-12 22:26:06.585376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.322 qpair failed and we were unable to recover it. 
[The same seven-line CONNECT failure sequence — "Unknown controller ID 0x1", "Connect command failed, rc -5" (trtype:TCP traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1), "sct 1, sc 130", "Failed to poll NVMe-oF Fabric CONNECT command", "Failed to connect tqpair=0x7fb904000b90", "CQ transport error -6 (No such device or address) on qpair id 1", "qpair failed and we were unable to recover it." — repeats 35 more times at roughly 10 ms intervals, from [2024-10-12 22:26:06.595305] through [2024-10-12 22:26:06.936287] (elapsed 00:37:48.322–00:37:48.588).]
00:37:48.588 [2024-10-12 22:26:06.946251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.588 [2024-10-12 22:26:06.946303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.588 [2024-10-12 22:26:06.946321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.588 [2024-10-12 22:26:06.946328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.588 [2024-10-12 22:26:06.946334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.588 [2024-10-12 22:26:06.946349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.588 qpair failed and we were unable to recover it. 
00:37:48.588 [2024-10-12 22:26:06.956177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.588 [2024-10-12 22:26:06.956232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.588 [2024-10-12 22:26:06.956246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.588 [2024-10-12 22:26:06.956253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.588 [2024-10-12 22:26:06.956260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.588 [2024-10-12 22:26:06.956274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.588 qpair failed and we were unable to recover it. 
00:37:48.588 [2024-10-12 22:26:06.966352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.588 [2024-10-12 22:26:06.966420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.588 [2024-10-12 22:26:06.966434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.588 [2024-10-12 22:26:06.966441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.588 [2024-10-12 22:26:06.966447] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.588 [2024-10-12 22:26:06.966462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.588 qpair failed and we were unable to recover it. 
00:37:48.588 [2024-10-12 22:26:06.976331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.588 [2024-10-12 22:26:06.976381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.588 [2024-10-12 22:26:06.976395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.588 [2024-10-12 22:26:06.976402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.588 [2024-10-12 22:26:06.976409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.588 [2024-10-12 22:26:06.976423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.588 qpair failed and we were unable to recover it. 
00:37:48.588 [2024-10-12 22:26:06.986349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.588 [2024-10-12 22:26:06.986395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.588 [2024-10-12 22:26:06.986409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.588 [2024-10-12 22:26:06.986416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.588 [2024-10-12 22:26:06.986427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb904000b90 00:37:48.588 [2024-10-12 22:26:06.986441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:48.588 qpair failed and we were unable to recover it. 
00:37:48.588 [2024-10-12 22:26:06.996409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.588 [2024-10-12 22:26:06.996506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.588 [2024-10-12 22:26:06.996571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.588 [2024-10-12 22:26:06.996597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.588 [2024-10-12 22:26:06.996617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f181f0 00:37:48.588 [2024-10-12 22:26:06.996670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.588 qpair failed and we were unable to recover it. 
00:37:48.588 [2024-10-12 22:26:07.006399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.588 [2024-10-12 22:26:07.006473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.588 [2024-10-12 22:26:07.006504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.588 [2024-10-12 22:26:07.006520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.588 [2024-10-12 22:26:07.006534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f181f0 00:37:48.588 [2024-10-12 22:26:07.006563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.588 qpair failed and we were unable to recover it. 
00:37:48.588 [2024-10-12 22:26:07.016441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.588 [2024-10-12 22:26:07.016508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.588 [2024-10-12 22:26:07.016527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.588 [2024-10-12 22:26:07.016538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.588 [2024-10-12 22:26:07.016547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f181f0 00:37:48.588 [2024-10-12 22:26:07.016567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:48.588 qpair failed and we were unable to recover it. 
00:37:48.588 [2024-10-12 22:26:07.026451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.588 [2024-10-12 22:26:07.026591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.589 [2024-10-12 22:26:07.026658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.589 [2024-10-12 22:26:07.026685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.589 [2024-10-12 22:26:07.026707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb8fc000b90 00:37:48.589 [2024-10-12 22:26:07.026760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:48.589 qpair failed and we were unable to recover it. 
00:37:48.589 [2024-10-12 22:26:07.036513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.589 [2024-10-12 22:26:07.036590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.589 [2024-10-12 22:26:07.036623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.589 [2024-10-12 22:26:07.036639] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.589 [2024-10-12 22:26:07.036654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb8fc000b90 00:37:48.589 [2024-10-12 22:26:07.036685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:48.589 qpair failed and we were unable to recover it. 
00:37:48.589 [2024-10-12 22:26:07.037075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f260b0 is same with the state(6) to be set 00:37:48.589 [2024-10-12 22:26:07.046519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.589 [2024-10-12 22:26:07.046631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.589 [2024-10-12 22:26:07.046697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.589 [2024-10-12 22:26:07.046723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.589 [2024-10-12 22:26:07.046745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb8f8000b90 00:37:48.589 [2024-10-12 22:26:07.046798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:48.589 qpair failed and we were unable to recover it. 
00:37:48.589 [2024-10-12 22:26:07.056564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:48.589 [2024-10-12 22:26:07.056684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:48.589 [2024-10-12 22:26:07.056749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:48.589 [2024-10-12 22:26:07.056774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:48.589 [2024-10-12 22:26:07.056796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb8f8000b90 00:37:48.589 [2024-10-12 22:26:07.056850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:48.589 qpair failed and we were unable to recover it. 00:37:48.589 [2024-10-12 22:26:07.057281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f260b0 (9): Bad file descriptor 00:37:48.589 Initializing NVMe Controllers 00:37:48.589 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:48.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:48.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:48.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:48.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:48.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:48.589 Initialization complete. Launching workers. 
00:37:48.589 Starting thread on core 1 00:37:48.589 Starting thread on core 2 00:37:48.589 Starting thread on core 3 00:37:48.589 Starting thread on core 0 00:37:48.589 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:48.589 00:37:48.589 real 0m11.340s 00:37:48.589 user 0m22.276s 00:37:48.589 sys 0m3.713s 00:37:48.589 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:48.589 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:48.589 ************************************ 00:37:48.589 END TEST nvmf_target_disconnect_tc2 00:37:48.589 ************************************ 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:48.851 rmmod nvme_tcp 00:37:48.851 rmmod nvme_fabrics 00:37:48.851 rmmod nvme_keyring 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@513 -- # '[' -n 3779004 ']' 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # killprocess 3779004 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3779004 ']' 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3779004 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3779004 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3779004' 00:37:48.851 killing process with pid 3779004 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3779004 00:37:48.851 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3779004 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.112 22:26:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.025 22:26:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:51.025 00:37:51.025 real 0m21.704s 00:37:51.025 user 0m49.737s 00:37:51.025 sys 0m9.914s 00:37:51.025 22:26:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:51.025 22:26:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:51.025 ************************************ 00:37:51.025 END TEST nvmf_target_disconnect 00:37:51.025 ************************************ 00:37:51.025 22:26:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:51.025 00:37:51.025 real 7m53.964s 00:37:51.025 user 17m22.970s 00:37:51.025 sys 2m24.782s 00:37:51.025 22:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:51.025 22:26:09 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:51.025 ************************************ 00:37:51.025 END TEST nvmf_host 00:37:51.025 ************************************ 00:37:51.286 22:26:09 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:51.286 22:26:09 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:51.286 22:26:09 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:51.286 22:26:09 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:51.286 22:26:09 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:51.286 22:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:51.286 ************************************ 00:37:51.286 START TEST nvmf_target_core_interrupt_mode 00:37:51.286 ************************************ 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:51.286 * Looking for test storage... 
00:37:51.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:51.286 22:26:09 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:51.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.286 --rc 
genhtml_branch_coverage=1 00:37:51.286 --rc genhtml_function_coverage=1 00:37:51.286 --rc genhtml_legend=1 00:37:51.286 --rc geninfo_all_blocks=1 00:37:51.286 --rc geninfo_unexecuted_blocks=1 00:37:51.286 00:37:51.286 ' 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:51.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.286 --rc genhtml_branch_coverage=1 00:37:51.286 --rc genhtml_function_coverage=1 00:37:51.286 --rc genhtml_legend=1 00:37:51.286 --rc geninfo_all_blocks=1 00:37:51.286 --rc geninfo_unexecuted_blocks=1 00:37:51.286 00:37:51.286 ' 00:37:51.286 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:51.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.286 --rc genhtml_branch_coverage=1 00:37:51.286 --rc genhtml_function_coverage=1 00:37:51.286 --rc genhtml_legend=1 00:37:51.286 --rc geninfo_all_blocks=1 00:37:51.287 --rc geninfo_unexecuted_blocks=1 00:37:51.287 00:37:51.287 ' 00:37:51.287 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:51.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.287 --rc genhtml_branch_coverage=1 00:37:51.287 --rc genhtml_function_coverage=1 00:37:51.287 --rc genhtml_legend=1 00:37:51.287 --rc geninfo_all_blocks=1 00:37:51.287 --rc geninfo_unexecuted_blocks=1 00:37:51.287 00:37:51.287 ' 00:37:51.287 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.548 
22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.548 22:26:09 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:51.548 
22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:51.548 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:51.548 ************************************ 00:37:51.548 START TEST nvmf_abort 00:37:51.548 ************************************ 00:37:51.549 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:51.549 * Looking for test storage... 
00:37:51.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:51.549 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:51.549 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:37:51.549 22:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:51.811 22:26:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.811 --rc genhtml_branch_coverage=1 00:37:51.811 --rc genhtml_function_coverage=1 00:37:51.811 --rc genhtml_legend=1 00:37:51.811 --rc geninfo_all_blocks=1 00:37:51.811 --rc geninfo_unexecuted_blocks=1 00:37:51.811 00:37:51.811 ' 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.811 --rc genhtml_branch_coverage=1 00:37:51.811 --rc genhtml_function_coverage=1 00:37:51.811 --rc genhtml_legend=1 00:37:51.811 --rc geninfo_all_blocks=1 00:37:51.811 --rc geninfo_unexecuted_blocks=1 00:37:51.811 00:37:51.811 ' 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.811 --rc genhtml_branch_coverage=1 00:37:51.811 --rc genhtml_function_coverage=1 00:37:51.811 --rc genhtml_legend=1 00:37:51.811 --rc geninfo_all_blocks=1 00:37:51.811 --rc geninfo_unexecuted_blocks=1 00:37:51.811 00:37:51.811 ' 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.811 --rc genhtml_branch_coverage=1 00:37:51.811 --rc genhtml_function_coverage=1 00:37:51.811 --rc genhtml_legend=1 00:37:51.811 --rc geninfo_all_blocks=1 00:37:51.811 --rc geninfo_unexecuted_blocks=1 00:37:51.811 00:37:51.811 ' 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.811 22:26:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.811 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.812 22:26:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:51.812 22:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:37:59.956 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:37:59.957 22:26:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:59.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:59.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:37:59.957 
22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:59.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:59.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:59.957 22:26:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- 
# ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:59.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:59.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:37:59.957 00:37:59.957 --- 10.0.0.2 ping statistics --- 00:37:59.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.957 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:59.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:59.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:37:59.957 00:37:59.957 --- 10.0.0.1 ping statistics --- 00:37:59.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.957 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # 
nvmfpid=3784424 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 3784424 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3784424 ']' 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:59.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:59.957 22:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.957 [2024-10-12 22:26:17.502321] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:59.957 [2024-10-12 22:26:17.503436] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:37:59.957 [2024-10-12 22:26:17.503484] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:59.958 [2024-10-12 22:26:17.592281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:59.958 [2024-10-12 22:26:17.640992] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:59.958 [2024-10-12 22:26:17.641047] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:59.958 [2024-10-12 22:26:17.641056] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:59.958 [2024-10-12 22:26:17.641063] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:59.958 [2024-10-12 22:26:17.641069] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:59.958 [2024-10-12 22:26:17.641229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:37:59.958 [2024-10-12 22:26:17.641495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.958 [2024-10-12 22:26:17.641496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.958 [2024-10-12 22:26:17.728017] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:59.958 [2024-10-12 22:26:17.728166] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:59.958 [2024-10-12 22:26:17.728724] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:37:59.958 [2024-10-12 22:26:17.728775] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.958 [2024-10-12 22:26:18.366585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:37:59.958 Malloc0 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.958 Delay0 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.958 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.219 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:00.219 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:38:00.219 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:00.219 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.219 [2024-10-12 22:26:18.454542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:00.219 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:00.219 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:00.219 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:00.219 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.219 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:00.219 22:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:00.219 [2024-10-12 22:26:18.585790] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:02.763 Initializing NVMe Controllers 00:38:02.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:02.764 controller IO queue size 128 less than required 00:38:02.764 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:02.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:02.764 Initialization complete. Launching workers. 
00:38:02.764 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28828 00:38:02.764 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28885, failed to submit 66 00:38:02.764 success 28828, unsuccessful 57, failed 0 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:02.764 rmmod nvme_tcp 00:38:02.764 rmmod nvme_fabrics 00:38:02.764 rmmod nvme_keyring 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:02.764 22:26:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 3784424 ']' 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 3784424 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3784424 ']' 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3784424 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3784424 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3784424' 00:38:02.764 killing process with pid 3784424 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3784424 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3784424 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:02.764 22:26:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:02.764 22:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:04.680 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:04.680 00:38:04.680 real 0m13.189s 00:38:04.680 user 0m10.672s 00:38:04.680 sys 0m6.889s 00:38:04.680 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:04.680 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:04.680 ************************************ 00:38:04.680 END TEST nvmf_abort 00:38:04.680 ************************************ 00:38:04.680 22:26:23 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:04.680 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:04.680 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:04.680 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:04.680 ************************************ 00:38:04.680 START TEST nvmf_ns_hotplug_stress 00:38:04.680 ************************************ 00:38:04.680 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:04.942 * Looking for test storage... 
00:38:04.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:04.942 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:04.942 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:38:04.942 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:04.943 22:26:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:04.943 22:26:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:04.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.943 --rc genhtml_branch_coverage=1 00:38:04.943 --rc genhtml_function_coverage=1 00:38:04.943 --rc genhtml_legend=1 00:38:04.943 --rc geninfo_all_blocks=1 00:38:04.943 --rc geninfo_unexecuted_blocks=1 00:38:04.943 00:38:04.943 ' 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:04.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.943 --rc genhtml_branch_coverage=1 00:38:04.943 --rc genhtml_function_coverage=1 00:38:04.943 --rc genhtml_legend=1 00:38:04.943 --rc geninfo_all_blocks=1 00:38:04.943 --rc geninfo_unexecuted_blocks=1 00:38:04.943 00:38:04.943 ' 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:04.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.943 --rc genhtml_branch_coverage=1 00:38:04.943 --rc genhtml_function_coverage=1 00:38:04.943 --rc genhtml_legend=1 00:38:04.943 --rc geninfo_all_blocks=1 00:38:04.943 --rc geninfo_unexecuted_blocks=1 00:38:04.943 00:38:04.943 ' 00:38:04.943 22:26:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:04.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.943 --rc genhtml_branch_coverage=1 00:38:04.943 --rc genhtml_function_coverage=1 00:38:04.943 --rc genhtml_legend=1 00:38:04.943 --rc geninfo_all_blocks=1 00:38:04.943 --rc geninfo_unexecuted_blocks=1 00:38:04.943 00:38:04.943 ' 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:04.943 22:26:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.943 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.943 
22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:04.944 22:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:13.085 
22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:13.085 22:26:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:13.085 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:38:13.085 22:26:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:13.085 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:13.085 22:26:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:13.085 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:13.085 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:13.085 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:13.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:13.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:38:13.086 00:38:13.086 --- 10.0.0.2 ping statistics --- 00:38:13.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.086 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:13.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:13.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:38:13.086 00:38:13.086 --- 10.0.0.1 ping statistics --- 00:38:13.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.086 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:13.086 22:26:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=3789171 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 3789171 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3789171 ']' 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:13.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:13.086 22:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:13.086 [2024-10-12 22:26:30.831429] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:13.086 [2024-10-12 22:26:30.832598] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:13.086 [2024-10-12 22:26:30.832649] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.086 [2024-10-12 22:26:30.924325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:13.086 [2024-10-12 22:26:30.970965] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.086 [2024-10-12 22:26:30.971024] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:13.086 [2024-10-12 22:26:30.971032] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.086 [2024-10-12 22:26:30.971040] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.086 [2024-10-12 22:26:30.971046] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:13.086 [2024-10-12 22:26:30.971204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:38:13.086 [2024-10-12 22:26:30.971478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.086 [2024-10-12 22:26:30.971478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:38:13.086 [2024-10-12 22:26:31.050570] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:13.086 [2024-10-12 22:26:31.051678] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:13.086 [2024-10-12 22:26:31.052058] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:13.086 [2024-10-12 22:26:31.052199] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:13.347 22:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:13.347 22:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:38:13.347 22:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:38:13.347 22:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:13.347 22:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:13.347 22:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:13.347 22:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:38:13.347 22:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:13.608 [2024-10-12 22:26:31.868544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:13.608 22:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:13.868 22:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:13.868 [2024-10-12 22:26:32.281392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:13.868 22:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:14.129 22:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:14.390 Malloc0 00:38:14.390 22:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:14.651 Delay0 00:38:14.651 22:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:14.651 22:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:14.911 NULL1 00:38:14.911 22:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:38:15.172 22:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3789810 00:38:15.172 22:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:15.172 22:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:15.172 22:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.432 22:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:15.432 22:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:15.432 22:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:15.693 true 00:38:15.693 22:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:15.693 22:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.954 22:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.215 22:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:16.215 22:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:16.215 true 00:38:16.215 22:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:16.216 22:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.476 22:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.737 22:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:16.737 22:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:16.998 true 00:38:16.998 22:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:16.998 22:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.998 22:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:17.259 22:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:17.259 22:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:17.519 true 00:38:17.519 22:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:17.519 22:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.780 22:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.041 22:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:18.041 22:26:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:18.041 true 00:38:18.041 22:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:18.041 22:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.301 22:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.562 22:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:18.562 22:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:18.562 true 00:38:18.562 22:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:18.562 22:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.822 22:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.082 22:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:38:19.082 22:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:19.082 true 00:38:19.082 22:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:19.082 22:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.343 22:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.606 22:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:19.606 22:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:19.606 true 00:38:19.606 22:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:19.606 22:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.866 22:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:20.125 22:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:38:20.125 22:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:20.125 true 00:38:20.385 22:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:20.385 22:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:20.385 22:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:20.645 22:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:20.645 22:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:20.906 true 00:38:20.906 22:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:20.906 22:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:20.906 22:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.166 22:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:21.166 22:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:21.427 true 00:38:21.427 22:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:21.427 22:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.427 22:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:21.687 22:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:21.687 22:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:21.948 true 00:38:21.948 22:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:21.948 22:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.210 22:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.210 22:26:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:22.210 22:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:22.472 true 00:38:22.472 22:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:22.472 22:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.731 22:26:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.732 22:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:22.732 22:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:22.991 true 00:38:22.991 22:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:22.991 22:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.252 22:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:38:23.252 22:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:23.252 22:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:23.511 true 00:38:23.511 22:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:23.511 22:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.771 22:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.031 22:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:24.031 22:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:24.031 true 00:38:24.031 22:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:24.031 22:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.291 22:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.551 22:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:24.551 22:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:24.551 true 00:38:24.551 22:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:24.551 22:26:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.811 22:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:25.072 22:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:25.072 22:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:25.072 true 00:38:25.072 22:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:25.072 22:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.333 22:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:25.593 22:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:25.593 22:26:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:25.593 true 00:38:25.853 22:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:25.853 22:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:25.853 22:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:26.113 22:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:26.113 22:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:26.374 true 00:38:26.374 22:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:26.374 22:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.374 22:26:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:26.634 22:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:26.634 22:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:26.894 true 00:38:26.894 22:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:26.894 22:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.155 22:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.155 22:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:27.155 22:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:27.415 true 00:38:27.415 22:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:27.415 22:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.676 22:26:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.676 22:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:27.676 22:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:27.937 true 00:38:27.937 22:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:27.937 22:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:28.197 22:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:28.458 22:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:28.458 22:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:28.459 true 00:38:28.459 22:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:28.459 22:26:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:28.720 22:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:28.982 22:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:28.982 22:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:28.982 true 00:38:28.982 22:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:28.982 22:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.242 22:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:29.503 22:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:29.503 22:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:29.503 true 00:38:29.503 22:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:29.503 22:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:38:29.762 22:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.023 22:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:30.023 22:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:30.023 true 00:38:30.283 22:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:30.283 22:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:30.283 22:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:30.543 22:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:30.544 22:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:30.805 true 00:38:30.805 22:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:30.805 22:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:38:30.805 22:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:31.065 22:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:38:31.066 22:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:38:31.327 true 00:38:31.327 22:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:31.327 22:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:31.587 22:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:31.587 22:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:38:31.587 22:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:38:31.849 true 00:38:31.849 22:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:31.849 22:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.110 22:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.110 22:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:38:32.110 22:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:38:32.372 true 00:38:32.372 22:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:32.372 22:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.634 22:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.634 22:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:38:32.634 22:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:38:32.895 true 00:38:32.895 22:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:32.895 22:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.156 22:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:33.418 22:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:38:33.418 22:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:38:33.418 true 00:38:33.418 22:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:33.418 22:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.680 22:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:33.941 22:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:38:33.941 22:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:38:33.941 true 00:38:33.941 22:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:33.941 22:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.202 22:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.464 22:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:38:34.464 22:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:38:34.464 true 00:38:34.464 22:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:34.464 22:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.724 22:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:34.986 22:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:38:34.986 22:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:38:34.986 true 00:38:35.247 22:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:35.247 22:26:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:35.247 22:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:35.508 22:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:38:35.508 22:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:38:35.770 true 00:38:35.770 22:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:35.770 22:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:35.770 22:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:36.031 22:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:38:36.031 22:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:38:36.291 true 00:38:36.291 22:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 
00:38:36.291 22:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:36.291 22:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:36.551 22:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:38:36.552 22:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:38:36.812 true 00:38:36.812 22:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:36.812 22:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:37.072 22:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:37.072 22:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:38:37.072 22:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:38:37.333 true 00:38:37.333 22:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 3789810 00:38:37.333 22:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:37.593 22:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:37.593 22:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:38:37.593 22:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:38:37.854 true 00:38:37.854 22:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:37.854 22:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.115 22:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:38.376 22:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:38:38.376 22:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:38:38.376 true 00:38:38.376 22:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:38.376 22:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.637 22:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:38.898 22:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:38:38.898 22:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:38:38.898 true 00:38:38.898 22:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:38.898 22:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.158 22:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:39.419 22:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:38:39.419 22:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:38:39.419 true 00:38:39.419 22:26:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:39.419 22:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.679 22:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:39.939 22:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:38:39.939 22:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:38:40.199 true 00:38:40.199 22:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:40.199 22:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.199 22:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:40.460 22:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:38:40.460 22:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:38:40.720 true 
00:38:40.720 22:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:40.720 22:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.720 22:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:40.980 22:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:38:40.980 22:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:38:41.241 true 00:38:41.241 22:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:41.241 22:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.515 22:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:41.515 22:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:38:41.515 22:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:38:41.828 true 00:38:41.828 22:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:41.828 22:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.828 22:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:42.108 22:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:38:42.108 22:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:38:42.432 true 00:38:42.432 22:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:42.432 22:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.432 22:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:42.694 22:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:38:42.694 22:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:38:42.954 true 00:38:42.954 22:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:42.954 22:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.954 22:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:43.215 22:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:38:43.215 22:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:38:43.476 true 00:38:43.476 22:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:43.476 22:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.476 22:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:43.736 22:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:38:43.736 22:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:38:43.995 true 00:38:43.995 22:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:43.995 22:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.255 22:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:44.255 22:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:38:44.255 22:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:38:44.515 true 00:38:44.515 22:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:44.515 22:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.775 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:44.775 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:38:44.775 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:38:45.036 true 00:38:45.036 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:45.036 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.297 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:45.297 Initializing NVMe Controllers 00:38:45.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:45.297 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:38:45.297 Controller IO queue size 128, less than required. 00:38:45.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:45.297 WARNING: Some requested NVMe devices were skipped 00:38:45.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:45.297 Initialization complete. Launching workers. 
00:38:45.297 ======================================================== 00:38:45.297 Latency(us) 00:38:45.297 Device Information : IOPS MiB/s Average min max 00:38:45.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30421.83 14.85 4207.44 1179.63 11631.44 00:38:45.297 ======================================================== 00:38:45.297 Total : 30421.83 14.85 4207.44 1179.63 11631.44 00:38:45.297 00:38:45.297 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:38:45.297 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:38:45.557 true 00:38:45.557 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3789810 00:38:45.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3789810) - No such process 00:38:45.557 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3789810 00:38:45.557 22:27:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.817 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:46.077 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:46.077 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:46.077 
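The interleaved output above (repeated `kill -0`, `nvmf_subsystem_remove_ns`, `nvmf_subsystem_add_ns`, and `bdev_null_resize` calls with an incrementing `null_size`) corresponds to the main stress loop of `ns_hotplug_stress.sh` lines 44–50: keep swapping namespace 1 and growing the NULL1 bdev for as long as the perf process is alive. A minimal sketch of that loop, reconstructed from the log, is below; `perf_pid`, the `rpc` stub (a real run uses `scripts/rpc.py`), and the initial `null_size` value are assumptions for illustration, not taken verbatim from the script.

```shell
# Sketch of the sh@44-50 loop seen in this log. Assumptions:
# rpc is stubbed with `true` so the sketch is self-contained,
# perf_pid comes from the environment, and 1024 is an
# illustrative starting size (the log shows 1041..1055).
rpc=${RPC:-true}
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1024
perf_pid=${PERF_PID:-}

# Loop until the I/O workload process exits; kill -0 only
# tests for process existence, it sends no signal.
while kill -0 "$perf_pid" 2>/dev/null; do
  "$rpc" nvmf_subsystem_remove_ns "$nqn" 1          # sh@45
  "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0        # sh@46
  null_size=$((null_size + 1))                      # sh@49
  "$rpc" bdev_null_resize NULL1 "$null_size"        # sh@50
done
```

When the workload exits, `kill -0` fails with "No such process" (visible in the log at line 44 of the script) and the loop falls through to the cleanup `wait`.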
22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:46.077 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:46.077 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:46.077 null0 00:38:46.077 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:46.077 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:46.077 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:46.337 null1 00:38:46.337 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:46.337 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:46.337 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:46.337 null2 00:38:46.597 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:46.597 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:46.597 22:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:46.597 null3 00:38:46.597 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:46.597 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:46.597 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:46.858 null4 00:38:46.858 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:46.858 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:46.858 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:47.118 null5 00:38:47.118 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:47.118 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:47.118 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:47.118 null6 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:47.380 22:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:47.380 null7 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:47.380 22:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:47.380 22:27:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3796098 3796099 3796101 3796103 3796105 3796108 3796110 3796112 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:47.380 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:47.642 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:47.642 22:27:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:47.642 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:47.642 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:47.642 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:47.642 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:47.642 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:47.642 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:47.903 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.164 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.425 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.686 22:27:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:48.686 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.686 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.686 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:48.686 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:48.686 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:48.686 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:48.686 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:48.686 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:48.686 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:48.686 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:48.947 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.209 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:49.472 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:49.732 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.732 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.732 22:27:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:38:49.733 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:38:49.994 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:38:49.995 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:38:49.995 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:49.995 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.995 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.995 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:49.995 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:49.995 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:49.995 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:50.255 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:50.255 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:50.256 22:27:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.256 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.516 22:27:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:50.516 22:27:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:50.777 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.038 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:51.299 22:27:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:51.299 rmmod nvme_tcp 00:38:51.299 rmmod nvme_fabrics 00:38:51.299 rmmod nvme_keyring 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 3789171 ']' 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 3789171 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3789171 ']' 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3789171 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@955 -- # uname 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3789171 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3789171' 00:38:51.299 killing process with pid 3789171 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3789171 00:38:51.299 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3789171 00:38:51.559 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:38:51.559 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:38:51.559 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:38:51.559 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:51.559 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 00:38:51.559 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:38:51.559 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 
-- # iptables-restore 00:38:51.559 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:51.559 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:51.559 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:51.560 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:51.560 22:27:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:54.103 22:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:54.103 00:38:54.103 real 0m48.859s 00:38:54.103 user 3m0.357s 00:38:54.103 sys 0m23.188s 00:38:54.103 22:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:54.103 22:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:54.103 ************************************ 00:38:54.103 END TEST nvmf_ns_hotplug_stress 00:38:54.103 ************************************ 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 
-- # set +x 00:38:54.103 ************************************ 00:38:54.103 START TEST nvmf_delete_subsystem 00:38:54.103 ************************************ 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:54.103 * Looking for test storage... 00:38:54.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # 
read -ra ver2 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:54.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.103 --rc genhtml_branch_coverage=1 00:38:54.103 --rc genhtml_function_coverage=1 00:38:54.103 --rc genhtml_legend=1 00:38:54.103 --rc geninfo_all_blocks=1 00:38:54.103 --rc geninfo_unexecuted_blocks=1 00:38:54.103 00:38:54.103 ' 00:38:54.103 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:54.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.103 --rc genhtml_branch_coverage=1 00:38:54.103 --rc genhtml_function_coverage=1 00:38:54.103 --rc genhtml_legend=1 00:38:54.103 --rc geninfo_all_blocks=1 00:38:54.103 --rc geninfo_unexecuted_blocks=1 00:38:54.104 00:38:54.104 ' 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:54.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.104 --rc genhtml_branch_coverage=1 00:38:54.104 --rc genhtml_function_coverage=1 00:38:54.104 --rc genhtml_legend=1 00:38:54.104 --rc geninfo_all_blocks=1 00:38:54.104 --rc geninfo_unexecuted_blocks=1 00:38:54.104 00:38:54.104 ' 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:54.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:54.104 --rc genhtml_branch_coverage=1 00:38:54.104 --rc genhtml_function_coverage=1 00:38:54.104 --rc genhtml_legend=1 00:38:54.104 --rc geninfo_all_blocks=1 00:38:54.104 --rc geninfo_unexecuted_blocks=1 00:38:54.104 00:38:54.104 ' 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:54.104 22:27:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:54.104 22:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:02.244 22:27:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:02.244 22:27:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:02.244 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:02.244 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == 
unbound ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:02.244 Found net devices under 
0000:4b:00.0: cvl_0_0 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:02.244 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:02.245 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # 
ip -4 addr flush cvl_0_1 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:02.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
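The nvmf_tcp_init steps traced above build a two-endpoint TCP topology on one host: one port of the NIC pair is moved into a network namespace to act as the target (10.0.0.2), while the other stays in the root namespace as the initiator (10.0.0.1). A condensed sketch of that command sequence follows; it requires root and the cvl_0_0/cvl_0_1 device names are specific to this test rig, so treat it as an illustration of the pattern rather than a portable script:

```shell
# Illustrative reconstruction of the namespace setup from the trace.
# Needs root; cvl_0_0 / cvl_0_1 are this rig's ice-driver net devices.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                  # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0       # target side (inside ns)

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port toward the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity pings in both directions, as in the log.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Running the target inside the namespace (the `NVMF_TARGET_NS_CMD` prefix in the log) is what lets a single machine exercise a real NIC-to-NIC TCP path instead of loopback.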
00:39:02.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:39:02.245 00:39:02.245 --- 10.0.0.2 ping statistics --- 00:39:02.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.245 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:02.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:02.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:39:02.245 00:39:02.245 --- 10.0.0.1 ping statistics --- 00:39:02.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.245 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:02.245 
22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=3801482 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 3801482 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3801482 ']' 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:02.245 22:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.245 [2024-10-12 22:27:19.572965] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:02.245 [2024-10-12 22:27:19.573927] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:02.245 [2024-10-12 22:27:19.573963] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:02.245 [2024-10-12 22:27:19.656550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:02.245 [2024-10-12 22:27:19.688013] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:02.245 [2024-10-12 22:27:19.688051] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:02.245 [2024-10-12 22:27:19.688061] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:02.245 [2024-10-12 22:27:19.688069] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:02.245 [2024-10-12 22:27:19.688076] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:02.245 [2024-10-12 22:27:19.688165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.245 [2024-10-12 22:27:19.688182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.245 [2024-10-12 22:27:19.736444] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:39:02.245 [2024-10-12 22:27:19.737126] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:02.245 [2024-10-12 22:27:19.737408] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.245 [2024-10-12 22:27:20.405199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.245 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.245 [2024-10-12 22:27:20.445758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.246 NULL1 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.246 Delay0 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3801739 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:02.246 22:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:02.246 [2024-10-12 22:27:20.553964] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
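For reference, the `rpc_cmd` calls traced above (delete_subsystem.sh lines 15–24) amount to the RPC sequence below. This is a dry-run sketch, not the harness's actual invocation: `RPC` defaults to `echo` here so the sequence prints instead of executing, and the `./scripts/rpc.py` path is an assumption about the SPDK checkout layout (the harness drives these calls through its own `rpc_cmd` wrapper).

```shell
# Dry-run sketch of the RPC sequence the harness issues above.
# In a real run RPC would point at SPDK's scripts/rpc.py (path
# assumed); defaulting to 'echo' lets the sequence be inspected
# without a running nvmf target.
RPC="${RPC:-echo ./scripts/rpc.py}"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

All arguments are taken verbatim from the log: the null bdev backs a delay bdev (`Delay0`) whose configured latencies keep I/O in flight long enough for the subsequent `nvmf_delete_subsystem` to race against the running perf workload.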
00:39:04.156 22:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:04.156 22:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.156 22:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Write completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Write completed with error (sct=0, sc=8) 00:39:04.156 Write completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Write completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Write completed with error (sct=0, sc=8) 
00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Write completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Write completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Write completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Write completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 starting I/O failed: -6 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.156 Read completed with error (sct=0, sc=8) 00:39:04.157 [2024-10-12 22:27:22.636645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d3ed0 is same with the state(6) to be set 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, 
sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read 
completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 starting I/O failed: -6 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 starting I/O failed: -6 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 starting I/O failed: -6 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 starting I/O failed: -6 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 starting I/O failed: -6 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 starting I/O failed: -6 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with 
error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 starting I/O failed: -6 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 starting I/O failed: -6 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 starting I/O failed: -6 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 starting I/O failed: -6 00:39:04.157 [2024-10-12 22:27:22.639408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d50000c00 is same with the state(6) to be set 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 
00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Write completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:04.157 Read completed with error (sct=0, sc=8) 00:39:05.543 [2024-10-12 22:27:23.611791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d1b20 is same with the state(6) to be set 00:39:05.543 Write completed with error (sct=0, sc=8) 00:39:05.543 Read completed with error (sct=0, sc=8) 00:39:05.543 Read completed with error (sct=0, sc=8) 00:39:05.543 Read 
completed with error (sct=0, sc=8) 00:39:05.543 Write completed with error (sct=0, sc=8) 00:39:05.543 Read completed with error (sct=0, sc=8) 00:39:05.543 Read completed with error (sct=0, sc=8) 00:39:05.543 Write completed with error (sct=0, sc=8) 00:39:05.543 Write completed with error (sct=0, sc=8) 00:39:05.543 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 [2024-10-12 22:27:23.640149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d40b0 is same with the state(6) to be set 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with 
error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 [2024-10-12 22:27:23.640853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d2c50 is same with the state(6) to be set 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 [2024-10-12 22:27:23.641654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d5000cfe0 is same with the state(6) to be set 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with 
error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Write completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 Read completed with error (sct=0, sc=8) 00:39:05.544 [2024-10-12 22:27:23.641903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d5000d780 is same with the state(6) to be set 00:39:05.544 Initializing NVMe Controllers 00:39:05.544 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:05.544 Controller IO queue size 128, less than required. 00:39:05.544 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:05.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:05.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:05.544 Initialization complete. Launching workers. 
00:39:05.544 ======================================================== 00:39:05.544 Latency(us) 00:39:05.544 Device Information : IOPS MiB/s Average min max 00:39:05.544 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.15 0.08 892715.98 228.63 1008303.51 00:39:05.544 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.73 0.08 930923.37 291.38 1011094.75 00:39:05.544 ======================================================== 00:39:05.544 Total : 324.88 0.16 910912.76 228.63 1011094.75 00:39:05.544 00:39:05.544 [2024-10-12 22:27:23.642618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d1b20 (9): Bad file descriptor 00:39:05.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:39:05.544 22:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.544 22:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:39:05.544 22:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3801739 00:39:05.544 22:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3801739 00:39:05.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3801739) - No such process 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3801739 00:39:05.805 22:27:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3801739 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3801739 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
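The `delay`/`kill -0` polling that produces the "No such process" line above (delete_subsystem.sh lines 34–38, and again as lines 57–60 later in this log) is a bounded wait loop: `kill -0` sends no signal and only checks that the pid still exists. A minimal stand-alone sketch, with a short background `sleep` standing in for the `spdk_nvme_perf` process and the iteration budget mirroring the script's 0.5 s cadence:

```shell
# Bounded wait for a child process, as in delete_subsystem.sh:
# poll with 'kill -0' (signal 0 = existence check only) every
# 0.5 s until the process exits or the iteration budget runs out.
# A short background sleep stands in for spdk_nvme_perf here.
sleep 1 &
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if [ "$delay" -gt 20 ]; then
        echo "process $perf_pid still running after budget" >&2
        exit 1
    fi
    delay=$((delay + 1))
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null
echo "process $perf_pid exited after $delay polls"
```

Once the pid is gone, `kill -0` fails with ESRCH — which bash's `kill` builtin reports as the `kill: (pid) - No such process` message seen in the log — and the test proceeds to the follow-up `wait`.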
00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:05.805 [2024-10-12 22:27:24.177708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3802418 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3802418 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:05.805 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:05.805 [2024-10-12 22:27:24.265228] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:06.376 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:06.376 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3802418 00:39:06.376 22:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:06.947 22:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:06.947 22:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3802418 00:39:06.947 22:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:07.519 22:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:07.519 22:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3802418 00:39:07.519 22:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:07.780 22:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:07.780 22:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3802418 00:39:07.780 22:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:08.352 22:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:08.352 22:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3802418 00:39:08.352 22:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:08.923 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:08.923 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3802418 00:39:08.923 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:09.185 Initializing NVMe Controllers 00:39:09.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:09.185 Controller IO queue size 128, less than required. 00:39:09.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:09.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:09.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:09.185 Initialization complete. Launching workers. 
00:39:09.185 ======================================================== 00:39:09.185 Latency(us) 00:39:09.185 Device Information : IOPS MiB/s Average min max 00:39:09.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002139.35 1000282.85 1005541.74 00:39:09.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004186.83 1000192.36 1042595.09 00:39:09.185 ======================================================== 00:39:09.185 Total : 256.00 0.12 1003163.09 1000192.36 1042595.09 00:39:09.185 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3802418 00:39:09.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3802418) - No such process 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3802418 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:09.447 rmmod nvme_tcp 00:39:09.447 rmmod nvme_fabrics 00:39:09.447 rmmod nvme_keyring 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 3801482 ']' 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 3801482 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3801482 ']' 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3801482 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3801482 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:09.447 22:27:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3801482' 00:39:09.447 killing process with pid 3801482 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3801482 00:39:09.447 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3801482 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:09.708 22:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:09.708 22:27:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:11.622 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:11.622 00:39:11.622 real 0m17.994s 00:39:11.622 user 0m26.284s 00:39:11.622 sys 0m7.314s 00:39:11.622 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:11.622 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:11.623 ************************************ 00:39:11.623 END TEST nvmf_delete_subsystem 00:39:11.623 ************************************ 00:39:11.623 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:11.623 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:11.623 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:11.623 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:11.888 ************************************ 00:39:11.888 START TEST nvmf_host_management 00:39:11.888 ************************************ 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:11.888 * Looking for test storage... 
00:39:11.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:11.888 22:27:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:11.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.888 --rc genhtml_branch_coverage=1 00:39:11.888 --rc genhtml_function_coverage=1 00:39:11.888 --rc genhtml_legend=1 00:39:11.888 --rc geninfo_all_blocks=1 00:39:11.888 --rc geninfo_unexecuted_blocks=1 00:39:11.888 00:39:11.888 ' 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:11.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.888 --rc genhtml_branch_coverage=1 00:39:11.888 --rc genhtml_function_coverage=1 00:39:11.888 --rc genhtml_legend=1 00:39:11.888 --rc geninfo_all_blocks=1 00:39:11.888 --rc geninfo_unexecuted_blocks=1 00:39:11.888 00:39:11.888 ' 00:39:11.888 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:11.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.888 --rc genhtml_branch_coverage=1 00:39:11.888 --rc genhtml_function_coverage=1 00:39:11.888 --rc genhtml_legend=1 00:39:11.888 --rc geninfo_all_blocks=1 00:39:11.889 --rc geninfo_unexecuted_blocks=1 00:39:11.889 00:39:11.889 ' 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:11.889 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.889 --rc genhtml_branch_coverage=1 00:39:11.889 --rc genhtml_function_coverage=1 00:39:11.889 --rc genhtml_legend=1 00:39:11.889 --rc geninfo_all_blocks=1 00:39:11.889 --rc geninfo_unexecuted_blocks=1 00:39:11.889 00:39:11.889 ' 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:11.889 22:27:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:11.889 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:12.151 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:12.151 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:12.151 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:12.151 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.151 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.151 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.151 
22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:12.152 22:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:20.297 
22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:20.297 22:27:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:20.297 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:20.297 22:27:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:20.297 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:20.297 22:27:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:20.297 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:20.297 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:20.298 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.298 22:27:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:20.298 
22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:20.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:20.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:39:20.298 00:39:20.298 --- 10.0.0.2 ping statistics --- 00:39:20.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.298 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:20.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:20.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:39:20.298 00:39:20.298 --- 10.0.0.1 ping statistics --- 00:39:20.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.298 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # 
'[' tcp == tcp ']' 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=3807166 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 3807166 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3807166 ']' 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:20.298 22:27:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:20.298 22:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:20.298 [2024-10-12 22:27:37.765584] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:20.298 [2024-10-12 22:27:37.766707] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:20.298 [2024-10-12 22:27:37.766761] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.298 [2024-10-12 22:27:37.855999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:20.298 [2024-10-12 22:27:37.905747] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:20.298 [2024-10-12 22:27:37.905801] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:20.298 [2024-10-12 22:27:37.905811] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:20.298 [2024-10-12 22:27:37.905818] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:20.298 [2024-10-12 22:27:37.905825] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:20.298 [2024-10-12 22:27:37.905986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:20.299 [2024-10-12 22:27:37.906152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:39:20.299 [2024-10-12 22:27:37.906255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:20.299 [2024-10-12 22:27:37.906254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:39:20.299 [2024-10-12 22:27:37.984494] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:20.299 [2024-10-12 22:27:37.985606] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:20.299 [2024-10-12 22:27:37.985787] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:20.299 [2024-10-12 22:27:37.986457] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:20.299 [2024-10-12 22:27:37.986515] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:20.299 [2024-10-12 22:27:38.623229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:20.299 22:27:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:20.299 Malloc0 00:39:20.299 [2024-10-12 22:27:38.711556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3807468 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3807468 /var/tmp/bdevperf.sock 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3807468 ']' 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:20.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:39:20.299 { 00:39:20.299 "params": { 00:39:20.299 "name": "Nvme$subsystem", 00:39:20.299 "trtype": "$TEST_TRANSPORT", 00:39:20.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:20.299 "adrfam": "ipv4", 00:39:20.299 "trsvcid": "$NVMF_PORT", 00:39:20.299 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:39:20.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:20.299 "hdgst": ${hdgst:-false}, 00:39:20.299 "ddgst": ${ddgst:-false} 00:39:20.299 }, 00:39:20.299 "method": "bdev_nvme_attach_controller" 00:39:20.299 } 00:39:20.299 EOF 00:39:20.299 )") 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:39:20.299 22:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:39:20.299 "params": { 00:39:20.299 "name": "Nvme0", 00:39:20.299 "trtype": "tcp", 00:39:20.299 "traddr": "10.0.0.2", 00:39:20.299 "adrfam": "ipv4", 00:39:20.299 "trsvcid": "4420", 00:39:20.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:20.299 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:20.299 "hdgst": false, 00:39:20.299 "ddgst": false 00:39:20.299 }, 00:39:20.299 "method": "bdev_nvme_attach_controller" 00:39:20.299 }' 00:39:20.560 [2024-10-12 22:27:38.816768] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:20.560 [2024-10-12 22:27:38.816823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807468 ] 00:39:20.560 [2024-10-12 22:27:38.896034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:20.560 [2024-10-12 22:27:38.928122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.822 Running I/O for 10 seconds... 
00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:21.397 22:27:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:21.397 
[2024-10-12 22:27:39.678865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.678903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.678918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.678928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.678939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.678949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.678958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.678967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.678978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.678987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.678998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679019] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 [2024-10-12 22:27:39.679128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238a060 is same with the state(6) to be set 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.397 [2024-10-12 22:27:39.684252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:21.397 [2024-10-12 22:27:39.684294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.397 [2024-10-12 22:27:39.684306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:21.397 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:21.397 [2024-10-12 22:27:39.684314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.397 [2024-10-12 22:27:39.684325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:21.397 [2024-10-12 22:27:39.684333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.397 [2024-10-12 22:27:39.684341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:21.398 [2024-10-12 22:27:39.684348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.684356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ced0 is same with the state(6) to be set 00:39:21.398 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.398 [2024-10-12 22:27:39.684814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 
[2024-10-12 22:27:39.684835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.684852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.684860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.684870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.684878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.684887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.684894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:21.398 [2024-10-12 22:27:39.684904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.684912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.684921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.684928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.684938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.684945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.684954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.684972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.684981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.684988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.684998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 
22:27:39.685139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.398 [2024-10-12 22:27:39.685435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.398 [2024-10-12 22:27:39.685444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 
22:27:39.685517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685704] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:21.399 [2024-10-12 22:27:39.685883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:21.399 [2024-10-12 22:27:39.685890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:21.399 [2024-10-12 22:27:39.685900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:21.399 [2024-10-12 22:27:39.685907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:21.399 [2024-10-12 22:27:39.685917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:21.399 [2024-10-12 22:27:39.685924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:21.399 [2024-10-12 22:27:39.685985] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f95f20 was disconnected and freed. reset controller.
00:39:21.399 [2024-10-12 22:27:39.687178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:39:21.399 task offset: 115840 on job bdev=Nvme0n1 fails
00:39:21.399
00:39:21.399 Latency(us)
00:39:21.399 [2024-10-12T20:27:39.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:21.399 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:39:21.399 Job: Nvme0n1 ended in about 0.58 seconds with error
00:39:21.399 Verification LBA range: start 0x0 length 0x400
00:39:21.399 Nvme0n1 : 0.58 1543.74 96.48 110.27 0.00 37773.46 2143.57 35389.44
00:39:21.399 [2024-10-12T20:27:39.888Z] ===================================================================================================================
00:39:21.399 [2024-10-12T20:27:39.888Z] Total : 1543.74 96.48 110.27 0.00 37773.46 2143.57 35389.44
00:39:21.399 [2024-10-12 22:27:39.689249] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:39:21.399 [2024-10-12 22:27:39.689271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7ced0 (9): Bad file descriptor
00:39:21.399 [2024-10-12 22:27:39.690310] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:39:21.399 [2024-10-12 22:27:39.690395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:39:21.399 [2024-10-12 22:27:39.690414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:39:21.399 [2024-10-12 22:27:39.690427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:39:21.399 [2024-10-12 22:27:39.690439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:39:21.399 [2024-10-12 22:27:39.690447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:39:21.399 [2024-10-12 22:27:39.690454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d7ced0
00:39:21.399 [2024-10-12 22:27:39.690473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7ced0 (9): Bad file descriptor
00:39:21.399 [2024-10-12 22:27:39.690485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:39:21.400 [2024-10-12 22:27:39.690492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:39:21.400 [2024-10-12 22:27:39.690500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:39:21.400 [2024-10-12 22:27:39.690512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:39:21.400 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:21.400 22:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:39:22.345 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3807468
00:39:22.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3807468) - No such process
00:39:22.345 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:39:22.345 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:39:22.345 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:39:22.345 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:39:22.345 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # config=()
00:39:22.345 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config
00:39:22.345 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:39:22.345 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:39:22.345 {
00:39:22.345 "params": {
00:39:22.345 "name": "Nvme$subsystem",
00:39:22.345 "trtype": "$TEST_TRANSPORT",
00:39:22.345 "traddr": "$NVMF_FIRST_TARGET_IP",
00:39:22.346 "adrfam": "ipv4",
00:39:22.346 "trsvcid": "$NVMF_PORT",
00:39:22.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:39:22.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:39:22.346 "hdgst": ${hdgst:-false},
00:39:22.346 "ddgst": ${ddgst:-false}
00:39:22.346 },
00:39:22.346 "method": "bdev_nvme_attach_controller"
00:39:22.346 }
00:39:22.346 EOF
00:39:22.346 )")
00:39:22.346 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@578 -- # cat
00:39:22.346 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # jq .
00:39:22.346 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=,
00:39:22.346 22:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:39:22.346 "params": {
00:39:22.346 "name": "Nvme0",
00:39:22.346 "trtype": "tcp",
00:39:22.346 "traddr": "10.0.0.2",
00:39:22.346 "adrfam": "ipv4",
00:39:22.346 "trsvcid": "4420",
00:39:22.346 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:22.346 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:22.346 "hdgst": false,
00:39:22.346 "ddgst": false
00:39:22.346 },
00:39:22.346 "method": "bdev_nvme_attach_controller"
00:39:22.346 }'
[2024-10-12 22:27:40.756266] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
[2024-10-12 22:27:40.756327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807816 ]
00:39:22.607 [2024-10-12 22:27:40.835007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:22.607 [2024-10-12 22:27:40.865525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:39:22.607 Running I/O for 1 seconds...
00:39:23.995 2024.00 IOPS, 126.50 MiB/s
00:39:23.995 Latency(us)
00:39:23.995 [2024-10-12T20:27:42.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:23.995 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:39:23.995 Verification LBA range: start 0x0 length 0x400
00:39:23.995 Nvme0n1 : 1.01 2062.18 128.89 0.00 0.00 30351.35 1645.23 31020.37
00:39:23.995 [2024-10-12T20:27:42.484Z] ===================================================================================================================
00:39:23.995 [2024-10-12T20:27:42.484Z] Total : 2062.18 128.89 0.00 0.00 30351.35 1645.23 31020.37
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:39:23.995 rmmod nvme_tcp
00:39:23.995 rmmod nvme_fabrics
00:39:23.995 rmmod nvme_keyring
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 3807166 ']'
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 3807166
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3807166 ']'
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3807166
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3807166
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3807166'
00:39:23.995 killing process with pid 3807166
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3807166
00:39:23.995 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3807166
00:39:23.996 [2024-10-12 22:27:42.420749] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:39:23.996 22:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:26.617 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:39:26.618
00:39:26.618 real 0m14.382s
00:39:26.618 user 0m18.767s
00:39:26.618 sys 0m7.263s
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:39:26.618 ************************************
00:39:26.618 END TEST nvmf_host_management
00:39:26.618 ************************************
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:39:26.618 ************************************
00:39:26.618 START TEST nvmf_lvol
00:39:26.618 ************************************
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:39:26.618 * Looking for test storage...
00:39:26.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:39:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:26.618 --rc genhtml_branch_coverage=1
00:39:26.618 --rc genhtml_function_coverage=1
00:39:26.618 --rc genhtml_legend=1
00:39:26.618 --rc geninfo_all_blocks=1
00:39:26.618 --rc geninfo_unexecuted_blocks=1
00:39:26.618
00:39:26.618 '
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:39:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:26.618 --rc genhtml_branch_coverage=1
00:39:26.618 --rc genhtml_function_coverage=1
00:39:26.618 --rc genhtml_legend=1
00:39:26.618 --rc geninfo_all_blocks=1
00:39:26.618 --rc geninfo_unexecuted_blocks=1
00:39:26.618
00:39:26.618 '
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:39:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:26.618 --rc genhtml_branch_coverage=1
00:39:26.618 --rc genhtml_function_coverage=1
00:39:26.618 --rc genhtml_legend=1
00:39:26.618 --rc geninfo_all_blocks=1
00:39:26.618 --rc geninfo_unexecuted_blocks=1
00:39:26.618
00:39:26.618 '
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:39:26.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:26.618 --rc genhtml_branch_coverage=1
00:39:26.618 --rc genhtml_function_coverage=1
00:39:26.618 --rc genhtml_legend=1
00:39:26.618 --rc geninfo_all_blocks=1
00:39:26.618 --rc geninfo_unexecuted_blocks=1
00:39:26.618
00:39:26.618 '
00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:26.618 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:26.619 
22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:26.619 22:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:34.763 22:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:34.763 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:34.763 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:34.763 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:34.764 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:34.764 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:34.764 22:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:34.764 22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:34.764 
22:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:34.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:34.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:39:34.764 00:39:34.764 --- 10.0.0.2 ping statistics --- 00:39:34.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.764 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:34.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:34.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:39:34.764 00:39:34.764 --- 10.0.0.1 ping statistics --- 00:39:34.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.764 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=3812161 
00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 3812161 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3812161 ']' 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:34.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:34.764 [2024-10-12 22:27:52.153266] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:34.764 [2024-10-12 22:27:52.154227] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:39:34.764 [2024-10-12 22:27:52.154267] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:34.764 [2024-10-12 22:27:52.237644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:34.764 [2024-10-12 22:27:52.269424] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:34.764 [2024-10-12 22:27:52.269460] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:34.764 [2024-10-12 22:27:52.269468] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:34.764 [2024-10-12 22:27:52.269475] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:34.764 [2024-10-12 22:27:52.269480] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:34.764 [2024-10-12 22:27:52.269616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:34.764 [2024-10-12 22:27:52.269764] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.764 [2024-10-12 22:27:52.269766] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:34.764 [2024-10-12 22:27:52.334927] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:34.764 [2024-10-12 22:27:52.334994] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:34.764 [2024-10-12 22:27:52.335651] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:34.764 [2024-10-12 22:27:52.335890] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:34.764 22:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:34.764 [2024-10-12 22:27:53.146651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:34.764 22:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:35.026 22:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:35.026 22:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:35.287 22:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:35.288 22:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:35.549 22:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:35.549 22:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0d5ffa1b-4f1a-40af-a8ad-6495212052d2 00:39:35.549 22:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0d5ffa1b-4f1a-40af-a8ad-6495212052d2 lvol 20 00:39:35.810 22:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f87e4ed6-bab3-4031-aaaf-07a1dd465e9a 00:39:35.810 22:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:36.071 22:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f87e4ed6-bab3-4031-aaaf-07a1dd465e9a 00:39:36.071 22:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:36.332 [2024-10-12 22:27:54.630434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:36.332 22:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:36.332 
22:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3812856 00:39:36.332 22:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:36.332 22:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:37.716 22:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f87e4ed6-bab3-4031-aaaf-07a1dd465e9a MY_SNAPSHOT 00:39:37.716 22:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7da0fe6d-4fa4-44f6-a33f-b4c0f34227dd 00:39:37.716 22:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f87e4ed6-bab3-4031-aaaf-07a1dd465e9a 30 00:39:37.977 22:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7da0fe6d-4fa4-44f6-a33f-b4c0f34227dd MY_CLONE 00:39:38.238 22:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=35334005-3594-478b-8778-95226cfc23e6 00:39:38.238 22:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 35334005-3594-478b-8778-95226cfc23e6 00:39:38.807 22:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3812856 00:39:46.928 Initializing NVMe Controllers 00:39:46.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:46.928 
Controller IO queue size 128, less than required. 00:39:46.928 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:46.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:46.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:46.928 Initialization complete. Launching workers. 00:39:46.928 ======================================================== 00:39:46.928 Latency(us) 00:39:46.928 Device Information : IOPS MiB/s Average min max 00:39:46.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15520.80 60.63 8248.86 3983.16 77561.65 00:39:46.928 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15567.30 60.81 8224.58 929.77 77164.19 00:39:46.928 ======================================================== 00:39:46.928 Total : 31088.10 121.44 8236.70 929.77 77561.65 00:39:46.928 00:39:46.928 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:46.928 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f87e4ed6-bab3-4031-aaaf-07a1dd465e9a 00:39:47.188 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0d5ffa1b-4f1a-40af-a8ad-6495212052d2 00:39:47.188 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:47.449 rmmod nvme_tcp 00:39:47.449 rmmod nvme_fabrics 00:39:47.449 rmmod nvme_keyring 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 3812161 ']' 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 3812161 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3812161 ']' 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3812161 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 3812161 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3812161' 00:39:47.449 killing process with pid 3812161 00:39:47.449 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3812161 00:39:47.450 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3812161 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:47.710 22:28:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:47.710 22:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.627 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:49.627 00:39:49.627 real 0m23.426s 00:39:49.627 user 0m55.295s 00:39:49.627 sys 0m10.590s 00:39:49.627 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:49.627 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:49.627 ************************************ 00:39:49.627 END TEST nvmf_lvol 00:39:49.627 ************************************ 00:39:49.627 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:49.627 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:49.627 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:49.627 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:49.627 ************************************ 00:39:49.627 START TEST nvmf_lvs_grow 00:39:49.627 ************************************ 00:39:49.627 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:49.888 * Looking for test storage... 
00:39:49.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:49.888 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:49.888 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:39:49.888 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:49.889 22:28:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:49.889 22:28:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:49.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.889 --rc genhtml_branch_coverage=1 00:39:49.889 --rc genhtml_function_coverage=1 00:39:49.889 --rc genhtml_legend=1 00:39:49.889 --rc geninfo_all_blocks=1 00:39:49.889 --rc geninfo_unexecuted_blocks=1 00:39:49.889 00:39:49.889 ' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:49.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.889 --rc genhtml_branch_coverage=1 00:39:49.889 --rc genhtml_function_coverage=1 00:39:49.889 --rc genhtml_legend=1 00:39:49.889 --rc geninfo_all_blocks=1 00:39:49.889 --rc geninfo_unexecuted_blocks=1 00:39:49.889 00:39:49.889 ' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:49.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.889 --rc genhtml_branch_coverage=1 00:39:49.889 --rc genhtml_function_coverage=1 00:39:49.889 --rc genhtml_legend=1 00:39:49.889 --rc geninfo_all_blocks=1 00:39:49.889 --rc geninfo_unexecuted_blocks=1 00:39:49.889 00:39:49.889 ' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:49.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.889 --rc genhtml_branch_coverage=1 00:39:49.889 --rc genhtml_function_coverage=1 00:39:49.889 --rc genhtml_legend=1 00:39:49.889 --rc geninfo_all_blocks=1 00:39:49.889 --rc 
geninfo_unexecuted_blocks=1 00:39:49.889 00:39:49.889 ' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:49.889 22:28:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.889 22:28:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:49.889 22:28:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:49.889 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:49.890 22:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:58.030 
22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:58.030 22:28:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:39:58.030 22:28:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:58.030 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:58.030 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:58.030 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:39:58.030 22:28:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:58.030 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:58.030 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:58.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:58.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:39:58.031 00:39:58.031 --- 10.0.0.2 ping statistics --- 00:39:58.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:58.031 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:58.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:58.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:39:58.031 00:39:58.031 --- 10.0.0.1 ping statistics --- 00:39:58.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:58.031 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:58.031 22:28:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=3818872 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 3818872 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3818872 ']' 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:58.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:58.031 [2024-10-12 22:28:15.724554] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:58.031 [2024-10-12 22:28:15.725561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:39:58.031 [2024-10-12 22:28:15.725599] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:58.031 [2024-10-12 22:28:15.787574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:58.031 [2024-10-12 22:28:15.817626] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:58.031 [2024-10-12 22:28:15.817662] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:58.031 [2024-10-12 22:28:15.817669] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:58.031 [2024-10-12 22:28:15.817674] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:58.031 [2024-10-12 22:28:15.817678] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:58.031 [2024-10-12 22:28:15.817702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:58.031 [2024-10-12 22:28:15.863124] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:58.031 [2024-10-12 22:28:15.863318] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:58.031 22:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:58.031 [2024-10-12 22:28:16.082481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:58.031 ************************************ 00:39:58.031 START TEST lvs_grow_clean 00:39:58.031 ************************************ 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:39:58.031 22:28:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:58.031 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:58.293 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:39:58.293 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:39:58.293 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:58.293 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:58.293 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:58.293 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 lvol 150 00:39:58.553 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9fd4473d-f162-4524-9fab-16ee5ce0fcef 00:39:58.553 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:58.553 22:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:58.814 [2024-10-12 22:28:17.086167] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:58.814 [2024-10-12 22:28:17.086328] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:58.814 true 00:39:58.814 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:58.814 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:39:58.814 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:58.814 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:59.074 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9fd4473d-f162-4524-9fab-16ee5ce0fcef 00:39:59.335 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:59.336 [2024-10-12 22:28:17.782858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:59.336 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:59.597 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:59.597 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3819349 00:39:59.597 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:59.597 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3819349 /var/tmp/bdevperf.sock 00:39:59.597 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3819349 ']' 00:39:59.597 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:59.597 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:59.597 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:59.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:39:59.597 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:59.597 22:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:59.612 [2024-10-12 22:28:18.003562] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:59.612 [2024-10-12 22:28:18.003629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3819349 ] 00:39:59.873 [2024-10-12 22:28:18.086797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.873 [2024-10-12 22:28:18.134001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:00.444 22:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:00.444 22:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:40:00.444 22:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:01.015 Nvme0n1 00:40:01.015 22:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:01.015 [ 00:40:01.015 { 00:40:01.015 "name": "Nvme0n1", 00:40:01.015 "aliases": [ 00:40:01.015 "9fd4473d-f162-4524-9fab-16ee5ce0fcef" 00:40:01.015 ], 00:40:01.015 "product_name": "NVMe disk", 00:40:01.015 
"block_size": 4096, 00:40:01.015 "num_blocks": 38912, 00:40:01.015 "uuid": "9fd4473d-f162-4524-9fab-16ee5ce0fcef", 00:40:01.015 "numa_id": 0, 00:40:01.015 "assigned_rate_limits": { 00:40:01.015 "rw_ios_per_sec": 0, 00:40:01.015 "rw_mbytes_per_sec": 0, 00:40:01.015 "r_mbytes_per_sec": 0, 00:40:01.015 "w_mbytes_per_sec": 0 00:40:01.015 }, 00:40:01.015 "claimed": false, 00:40:01.015 "zoned": false, 00:40:01.015 "supported_io_types": { 00:40:01.015 "read": true, 00:40:01.015 "write": true, 00:40:01.015 "unmap": true, 00:40:01.015 "flush": true, 00:40:01.015 "reset": true, 00:40:01.015 "nvme_admin": true, 00:40:01.015 "nvme_io": true, 00:40:01.015 "nvme_io_md": false, 00:40:01.015 "write_zeroes": true, 00:40:01.015 "zcopy": false, 00:40:01.015 "get_zone_info": false, 00:40:01.015 "zone_management": false, 00:40:01.015 "zone_append": false, 00:40:01.015 "compare": true, 00:40:01.015 "compare_and_write": true, 00:40:01.015 "abort": true, 00:40:01.015 "seek_hole": false, 00:40:01.015 "seek_data": false, 00:40:01.015 "copy": true, 00:40:01.015 "nvme_iov_md": false 00:40:01.015 }, 00:40:01.015 "memory_domains": [ 00:40:01.015 { 00:40:01.015 "dma_device_id": "system", 00:40:01.015 "dma_device_type": 1 00:40:01.015 } 00:40:01.015 ], 00:40:01.015 "driver_specific": { 00:40:01.015 "nvme": [ 00:40:01.015 { 00:40:01.015 "trid": { 00:40:01.015 "trtype": "TCP", 00:40:01.015 "adrfam": "IPv4", 00:40:01.015 "traddr": "10.0.0.2", 00:40:01.015 "trsvcid": "4420", 00:40:01.015 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:01.015 }, 00:40:01.015 "ctrlr_data": { 00:40:01.015 "cntlid": 1, 00:40:01.015 "vendor_id": "0x8086", 00:40:01.015 "model_number": "SPDK bdev Controller", 00:40:01.015 "serial_number": "SPDK0", 00:40:01.015 "firmware_revision": "24.09.1", 00:40:01.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:01.015 "oacs": { 00:40:01.015 "security": 0, 00:40:01.015 "format": 0, 00:40:01.015 "firmware": 0, 00:40:01.015 "ns_manage": 0 00:40:01.015 }, 00:40:01.015 "multi_ctrlr": true, 
00:40:01.015 "ana_reporting": false 00:40:01.015 }, 00:40:01.015 "vs": { 00:40:01.015 "nvme_version": "1.3" 00:40:01.015 }, 00:40:01.015 "ns_data": { 00:40:01.015 "id": 1, 00:40:01.015 "can_share": true 00:40:01.015 } 00:40:01.015 } 00:40:01.015 ], 00:40:01.015 "mp_policy": "active_passive" 00:40:01.015 } 00:40:01.015 } 00:40:01.015 ] 00:40:01.015 22:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3819583 00:40:01.015 22:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:01.015 22:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:01.277 Running I/O for 10 seconds... 00:40:02.220 Latency(us) 00:40:02.220 [2024-10-12T20:28:20.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:02.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:02.220 Nvme0n1 : 1.00 16565.00 64.71 0.00 0.00 0.00 0.00 0.00 00:40:02.220 [2024-10-12T20:28:20.709Z] =================================================================================================================== 00:40:02.220 [2024-10-12T20:28:20.709Z] Total : 16565.00 64.71 0.00 0.00 0.00 0.00 0.00 00:40:02.220 00:40:03.162 22:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:40:03.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:03.162 Nvme0n1 : 2.00 16955.00 66.23 0.00 0.00 0.00 0.00 0.00 00:40:03.162 [2024-10-12T20:28:21.651Z] 
=================================================================================================================== 00:40:03.162 [2024-10-12T20:28:21.651Z] Total : 16955.00 66.23 0.00 0.00 0.00 0.00 0.00 00:40:03.162 00:40:03.162 true 00:40:03.424 22:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:40:03.424 22:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:03.424 22:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:03.424 22:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:03.424 22:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3819583 00:40:04.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:04.365 Nvme0n1 : 3.00 17169.67 67.07 0.00 0.00 0.00 0.00 0.00 00:40:04.365 [2024-10-12T20:28:22.854Z] =================================================================================================================== 00:40:04.365 [2024-10-12T20:28:22.854Z] Total : 17169.67 67.07 0.00 0.00 0.00 0.00 0.00 00:40:04.365 00:40:05.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:05.308 Nvme0n1 : 4.00 17361.50 67.82 0.00 0.00 0.00 0.00 0.00 00:40:05.308 [2024-10-12T20:28:23.797Z] =================================================================================================================== 00:40:05.308 [2024-10-12T20:28:23.797Z] Total : 17361.50 67.82 0.00 0.00 0.00 0.00 0.00 00:40:05.308 00:40:06.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:40:06.250 Nvme0n1 : 5.00 18749.80 73.24 0.00 0.00 0.00 0.00 0.00 00:40:06.250 [2024-10-12T20:28:24.739Z] =================================================================================================================== 00:40:06.250 [2024-10-12T20:28:24.739Z] Total : 18749.80 73.24 0.00 0.00 0.00 0.00 0.00 00:40:06.250 00:40:07.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:07.192 Nvme0n1 : 6.00 19848.83 77.53 0.00 0.00 0.00 0.00 0.00 00:40:07.192 [2024-10-12T20:28:25.681Z] =================================================================================================================== 00:40:07.192 [2024-10-12T20:28:25.681Z] Total : 19848.83 77.53 0.00 0.00 0.00 0.00 0.00 00:40:07.192 00:40:08.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:08.132 Nvme0n1 : 7.00 20643.14 80.64 0.00 0.00 0.00 0.00 0.00 00:40:08.132 [2024-10-12T20:28:26.621Z] =================================================================================================================== 00:40:08.132 [2024-10-12T20:28:26.621Z] Total : 20643.14 80.64 0.00 0.00 0.00 0.00 0.00 00:40:08.132 00:40:09.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:09.073 Nvme0n1 : 8.00 21238.50 82.96 0.00 0.00 0.00 0.00 0.00 00:40:09.073 [2024-10-12T20:28:27.562Z] =================================================================================================================== 00:40:09.073 [2024-10-12T20:28:27.562Z] Total : 21238.50 82.96 0.00 0.00 0.00 0.00 0.00 00:40:09.073 00:40:10.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:10.455 Nvme0n1 : 9.00 21701.67 84.77 0.00 0.00 0.00 0.00 0.00 00:40:10.455 [2024-10-12T20:28:28.944Z] =================================================================================================================== 00:40:10.455 [2024-10-12T20:28:28.944Z] Total : 21701.67 84.77 0.00 0.00 0.00 0.00 0.00 00:40:10.455 
00:40:11.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:11.396 Nvme0n1 : 10.00 22078.20 86.24 0.00 0.00 0.00 0.00 0.00 00:40:11.396 [2024-10-12T20:28:29.885Z] =================================================================================================================== 00:40:11.396 [2024-10-12T20:28:29.885Z] Total : 22078.20 86.24 0.00 0.00 0.00 0.00 0.00 00:40:11.396 00:40:11.396 00:40:11.396 Latency(us) 00:40:11.396 [2024-10-12T20:28:29.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:11.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:11.396 Nvme0n1 : 10.00 22077.28 86.24 0.00 0.00 5794.44 3099.31 32112.64 00:40:11.396 [2024-10-12T20:28:29.885Z] =================================================================================================================== 00:40:11.396 [2024-10-12T20:28:29.885Z] Total : 22077.28 86.24 0.00 0.00 5794.44 3099.31 32112.64 00:40:11.396 { 00:40:11.396 "results": [ 00:40:11.396 { 00:40:11.396 "job": "Nvme0n1", 00:40:11.396 "core_mask": "0x2", 00:40:11.396 "workload": "randwrite", 00:40:11.396 "status": "finished", 00:40:11.396 "queue_depth": 128, 00:40:11.396 "io_size": 4096, 00:40:11.396 "runtime": 10.003361, 00:40:11.396 "iops": 22077.279826250397, 00:40:11.396 "mibps": 86.23937432129061, 00:40:11.396 "io_failed": 0, 00:40:11.396 "io_timeout": 0, 00:40:11.396 "avg_latency_us": 5794.439749268347, 00:40:11.396 "min_latency_us": 3099.306666666667, 00:40:11.396 "max_latency_us": 32112.64 00:40:11.396 } 00:40:11.396 ], 00:40:11.396 "core_count": 1 00:40:11.396 } 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3819349 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3819349 ']' 00:40:11.396 22:28:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3819349 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3819349 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3819349' 00:40:11.396 killing process with pid 3819349 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3819349 00:40:11.396 Received shutdown signal, test time was about 10.000000 seconds 00:40:11.396 00:40:11.396 Latency(us) 00:40:11.396 [2024-10-12T20:28:29.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:11.396 [2024-10-12T20:28:29.885Z] =================================================================================================================== 00:40:11.396 [2024-10-12T20:28:29.885Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3819349 00:40:11.396 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:11.657 22:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:11.657 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:40:11.657 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:11.917 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:11.917 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:11.917 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:12.178 [2024-10-12 22:28:30.438230] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:40:12.178 request: 00:40:12.178 { 00:40:12.178 "uuid": "c679575d-e04d-47fa-85b7-7dbba4c9ad59", 00:40:12.178 "method": 
"bdev_lvol_get_lvstores", 00:40:12.178 "req_id": 1 00:40:12.178 } 00:40:12.178 Got JSON-RPC error response 00:40:12.178 response: 00:40:12.178 { 00:40:12.178 "code": -19, 00:40:12.178 "message": "No such device" 00:40:12.178 } 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:12.178 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:12.440 aio_bdev 00:40:12.440 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9fd4473d-f162-4524-9fab-16ee5ce0fcef 00:40:12.440 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=9fd4473d-f162-4524-9fab-16ee5ce0fcef 00:40:12.440 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:12.440 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:40:12.440 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:12.440 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:12.440 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:12.701 22:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9fd4473d-f162-4524-9fab-16ee5ce0fcef -t 2000 00:40:12.701 [ 00:40:12.701 { 00:40:12.701 "name": "9fd4473d-f162-4524-9fab-16ee5ce0fcef", 00:40:12.701 "aliases": [ 00:40:12.701 "lvs/lvol" 00:40:12.701 ], 00:40:12.701 "product_name": "Logical Volume", 00:40:12.701 "block_size": 4096, 00:40:12.701 "num_blocks": 38912, 00:40:12.701 "uuid": "9fd4473d-f162-4524-9fab-16ee5ce0fcef", 00:40:12.701 "assigned_rate_limits": { 00:40:12.701 "rw_ios_per_sec": 0, 00:40:12.701 "rw_mbytes_per_sec": 0, 00:40:12.701 "r_mbytes_per_sec": 0, 00:40:12.701 "w_mbytes_per_sec": 0 00:40:12.701 }, 00:40:12.701 "claimed": false, 00:40:12.701 "zoned": false, 00:40:12.701 "supported_io_types": { 00:40:12.701 "read": true, 00:40:12.701 "write": true, 00:40:12.701 "unmap": true, 00:40:12.701 "flush": false, 00:40:12.701 "reset": true, 00:40:12.701 "nvme_admin": false, 00:40:12.701 "nvme_io": false, 00:40:12.701 "nvme_io_md": false, 00:40:12.701 "write_zeroes": true, 00:40:12.701 "zcopy": false, 00:40:12.701 "get_zone_info": false, 00:40:12.701 "zone_management": false, 00:40:12.701 "zone_append": false, 00:40:12.701 "compare": false, 00:40:12.701 "compare_and_write": false, 00:40:12.701 "abort": false, 00:40:12.701 "seek_hole": true, 00:40:12.701 "seek_data": true, 00:40:12.701 "copy": false, 00:40:12.701 "nvme_iov_md": false 00:40:12.701 }, 00:40:12.701 "driver_specific": { 00:40:12.701 "lvol": { 00:40:12.701 "lvol_store_uuid": "c679575d-e04d-47fa-85b7-7dbba4c9ad59", 00:40:12.701 "base_bdev": "aio_bdev", 00:40:12.701 
"thin_provision": false, 00:40:12.701 "num_allocated_clusters": 38, 00:40:12.701 "snapshot": false, 00:40:12.701 "clone": false, 00:40:12.701 "esnap_clone": false 00:40:12.701 } 00:40:12.701 } 00:40:12.701 } 00:40:12.701 ] 00:40:12.701 22:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:40:12.701 22:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:40:12.701 22:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:12.961 22:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:12.961 22:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 00:40:12.961 22:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:13.221 22:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:13.221 22:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9fd4473d-f162-4524-9fab-16ee5ce0fcef 00:40:13.221 22:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c679575d-e04d-47fa-85b7-7dbba4c9ad59 
00:40:13.481 22:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:13.742 00:40:13.742 real 0m15.929s 00:40:13.742 user 0m15.628s 00:40:13.742 sys 0m1.431s 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:13.742 ************************************ 00:40:13.742 END TEST lvs_grow_clean 00:40:13.742 ************************************ 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:13.742 ************************************ 00:40:13.742 START TEST lvs_grow_dirty 00:40:13.742 ************************************ 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:13.742 22:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:13.742 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:14.003 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:14.003 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:14.263 22:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:14.263 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:14.263 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:14.263 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:14.263 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:14.263 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 lvol 150 00:40:14.523 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=817cb1aa-3fb9-46ca-8cad-962828475201 00:40:14.523 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:14.523 22:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:14.783 [2024-10-12 22:28:33.050152] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:14.783 [2024-10-12 
22:28:33.050301] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:14.783 true 00:40:14.783 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:14.783 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:14.783 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:14.783 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:15.043 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 817cb1aa-3fb9-46ca-8cad-962828475201 00:40:15.304 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:15.304 [2024-10-12 22:28:33.694637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:15.304 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:15.564 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3822324 00:40:15.564 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:15.564 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:15.564 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3822324 /var/tmp/bdevperf.sock 00:40:15.564 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3822324 ']' 00:40:15.564 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:15.564 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:15.564 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:15.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:15.564 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:15.564 22:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:15.564 [2024-10-12 22:28:33.932893] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:40:15.564 [2024-10-12 22:28:33.932952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3822324 ] 00:40:15.564 [2024-10-12 22:28:34.009660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.564 [2024-10-12 22:28:34.038935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.504 22:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:16.504 22:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:40:16.504 22:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:16.504 Nvme0n1 00:40:16.504 22:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:16.765 [ 00:40:16.765 { 00:40:16.765 "name": "Nvme0n1", 00:40:16.765 "aliases": [ 00:40:16.765 "817cb1aa-3fb9-46ca-8cad-962828475201" 00:40:16.765 ], 00:40:16.765 "product_name": "NVMe disk", 00:40:16.765 "block_size": 4096, 00:40:16.765 "num_blocks": 38912, 00:40:16.765 "uuid": "817cb1aa-3fb9-46ca-8cad-962828475201", 00:40:16.765 "numa_id": 0, 00:40:16.765 "assigned_rate_limits": { 00:40:16.765 "rw_ios_per_sec": 0, 00:40:16.765 "rw_mbytes_per_sec": 0, 00:40:16.765 "r_mbytes_per_sec": 0, 00:40:16.765 "w_mbytes_per_sec": 0 00:40:16.765 }, 00:40:16.765 "claimed": false, 00:40:16.765 "zoned": false, 
00:40:16.765 "supported_io_types": { 00:40:16.765 "read": true, 00:40:16.765 "write": true, 00:40:16.765 "unmap": true, 00:40:16.765 "flush": true, 00:40:16.765 "reset": true, 00:40:16.765 "nvme_admin": true, 00:40:16.765 "nvme_io": true, 00:40:16.765 "nvme_io_md": false, 00:40:16.765 "write_zeroes": true, 00:40:16.765 "zcopy": false, 00:40:16.765 "get_zone_info": false, 00:40:16.765 "zone_management": false, 00:40:16.765 "zone_append": false, 00:40:16.765 "compare": true, 00:40:16.765 "compare_and_write": true, 00:40:16.765 "abort": true, 00:40:16.765 "seek_hole": false, 00:40:16.765 "seek_data": false, 00:40:16.765 "copy": true, 00:40:16.765 "nvme_iov_md": false 00:40:16.765 }, 00:40:16.765 "memory_domains": [ 00:40:16.765 { 00:40:16.765 "dma_device_id": "system", 00:40:16.765 "dma_device_type": 1 00:40:16.765 } 00:40:16.765 ], 00:40:16.765 "driver_specific": { 00:40:16.765 "nvme": [ 00:40:16.765 { 00:40:16.765 "trid": { 00:40:16.765 "trtype": "TCP", 00:40:16.765 "adrfam": "IPv4", 00:40:16.765 "traddr": "10.0.0.2", 00:40:16.765 "trsvcid": "4420", 00:40:16.765 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:16.765 }, 00:40:16.765 "ctrlr_data": { 00:40:16.765 "cntlid": 1, 00:40:16.765 "vendor_id": "0x8086", 00:40:16.765 "model_number": "SPDK bdev Controller", 00:40:16.765 "serial_number": "SPDK0", 00:40:16.765 "firmware_revision": "24.09.1", 00:40:16.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:16.765 "oacs": { 00:40:16.765 "security": 0, 00:40:16.765 "format": 0, 00:40:16.765 "firmware": 0, 00:40:16.765 "ns_manage": 0 00:40:16.765 }, 00:40:16.765 "multi_ctrlr": true, 00:40:16.765 "ana_reporting": false 00:40:16.765 }, 00:40:16.765 "vs": { 00:40:16.765 "nvme_version": "1.3" 00:40:16.765 }, 00:40:16.765 "ns_data": { 00:40:16.765 "id": 1, 00:40:16.765 "can_share": true 00:40:16.765 } 00:40:16.765 } 00:40:16.765 ], 00:40:16.765 "mp_policy": "active_passive" 00:40:16.765 } 00:40:16.765 } 00:40:16.765 ] 00:40:16.765 22:28:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:16.765 22:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3822659 00:40:16.765 22:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:16.765 Running I/O for 10 seconds... 00:40:18.150 Latency(us) 00:40:18.150 [2024-10-12T20:28:36.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:18.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:18.151 Nvme0n1 : 1.00 17405.00 67.99 0.00 0.00 0.00 0.00 0.00 00:40:18.151 [2024-10-12T20:28:36.640Z] =================================================================================================================== 00:40:18.151 [2024-10-12T20:28:36.640Z] Total : 17405.00 67.99 0.00 0.00 0.00 0.00 0.00 00:40:18.151 00:40:18.724 22:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:18.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:18.986 Nvme0n1 : 2.00 17662.50 68.99 0.00 0.00 0.00 0.00 0.00 00:40:18.986 [2024-10-12T20:28:37.475Z] =================================================================================================================== 00:40:18.986 [2024-10-12T20:28:37.475Z] Total : 17662.50 68.99 0.00 0.00 0.00 0.00 0.00 00:40:18.986 00:40:18.986 true 00:40:18.986 22:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:18.986 22:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:19.247 22:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:19.247 22:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:19.247 22:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3822659 00:40:19.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:19.819 Nvme0n1 : 3.00 17748.00 69.33 0.00 0.00 0.00 0.00 0.00 00:40:19.819 [2024-10-12T20:28:38.308Z] =================================================================================================================== 00:40:19.819 [2024-10-12T20:28:38.308Z] Total : 17748.00 69.33 0.00 0.00 0.00 0.00 0.00 00:40:19.819 00:40:20.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:20.761 Nvme0n1 : 4.00 17807.00 69.56 0.00 0.00 0.00 0.00 0.00 00:40:20.761 [2024-10-12T20:28:39.250Z] =================================================================================================================== 00:40:20.761 [2024-10-12T20:28:39.250Z] Total : 17807.00 69.56 0.00 0.00 0.00 0.00 0.00 00:40:20.761 00:40:22.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:22.148 Nvme0n1 : 5.00 18058.60 70.54 0.00 0.00 0.00 0.00 0.00 00:40:22.148 [2024-10-12T20:28:40.637Z] =================================================================================================================== 00:40:22.148 [2024-10-12T20:28:40.637Z] Total : 18058.60 70.54 0.00 0.00 0.00 0.00 0.00 00:40:22.148 00:40:23.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:40:23.091 Nvme0n1 : 6.00 19272.83 75.28 0.00 0.00 0.00 0.00 0.00 00:40:23.091 [2024-10-12T20:28:41.580Z] =================================================================================================================== 00:40:23.091 [2024-10-12T20:28:41.580Z] Total : 19272.83 75.28 0.00 0.00 0.00 0.00 0.00 00:40:23.091 00:40:24.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:24.034 Nvme0n1 : 7.00 20149.43 78.71 0.00 0.00 0.00 0.00 0.00 00:40:24.034 [2024-10-12T20:28:42.523Z] =================================================================================================================== 00:40:24.034 [2024-10-12T20:28:42.523Z] Total : 20149.43 78.71 0.00 0.00 0.00 0.00 0.00 00:40:24.034 00:40:24.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:24.978 Nvme0n1 : 8.00 20808.75 81.28 0.00 0.00 0.00 0.00 0.00 00:40:24.978 [2024-10-12T20:28:43.467Z] =================================================================================================================== 00:40:24.978 [2024-10-12T20:28:43.467Z] Total : 20808.75 81.28 0.00 0.00 0.00 0.00 0.00 00:40:24.978 00:40:25.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:25.920 Nvme0n1 : 9.00 21324.89 83.30 0.00 0.00 0.00 0.00 0.00 00:40:25.920 [2024-10-12T20:28:44.409Z] =================================================================================================================== 00:40:25.920 [2024-10-12T20:28:44.409Z] Total : 21324.89 83.30 0.00 0.00 0.00 0.00 0.00 00:40:25.920 00:40:26.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:26.950 Nvme0n1 : 10.00 21739.30 84.92 0.00 0.00 0.00 0.00 0.00 00:40:26.950 [2024-10-12T20:28:45.439Z] =================================================================================================================== 00:40:26.950 [2024-10-12T20:28:45.439Z] Total : 21739.30 84.92 0.00 0.00 0.00 0.00 0.00 00:40:26.950 00:40:26.950 
00:40:26.950 Latency(us) 00:40:26.950 [2024-10-12T20:28:45.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:26.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:26.951 Nvme0n1 : 10.00 21739.02 84.92 0.00 0.00 5884.82 3181.23 31020.37 00:40:26.951 [2024-10-12T20:28:45.440Z] =================================================================================================================== 00:40:26.951 [2024-10-12T20:28:45.440Z] Total : 21739.02 84.92 0.00 0.00 5884.82 3181.23 31020.37 00:40:26.951 { 00:40:26.951 "results": [ 00:40:26.951 { 00:40:26.951 "job": "Nvme0n1", 00:40:26.951 "core_mask": "0x2", 00:40:26.951 "workload": "randwrite", 00:40:26.951 "status": "finished", 00:40:26.951 "queue_depth": 128, 00:40:26.951 "io_size": 4096, 00:40:26.951 "runtime": 10.003029, 00:40:26.951 "iops": 21739.015252280085, 00:40:26.951 "mibps": 84.91802832921908, 00:40:26.951 "io_failed": 0, 00:40:26.951 "io_timeout": 0, 00:40:26.951 "avg_latency_us": 5884.819981850735, 00:40:26.951 "min_latency_us": 3181.2266666666665, 00:40:26.951 "max_latency_us": 31020.373333333333 00:40:26.951 } 00:40:26.951 ], 00:40:26.951 "core_count": 1 00:40:26.951 } 00:40:26.951 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3822324 00:40:26.951 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3822324 ']' 00:40:26.951 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3822324 00:40:26.951 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:40:26.951 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:26.951 22:28:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3822324 00:40:26.951 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:26.951 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:26.951 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3822324' 00:40:26.951 killing process with pid 3822324 00:40:26.951 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3822324 00:40:26.951 Received shutdown signal, test time was about 10.000000 seconds 00:40:26.951 00:40:26.951 Latency(us) 00:40:26.951 [2024-10-12T20:28:45.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:26.951 [2024-10-12T20:28:45.440Z] =================================================================================================================== 00:40:26.951 [2024-10-12T20:28:45.440Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:26.951 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3822324 00:40:27.245 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:27.245 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:27.512 22:28:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:27.512 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:27.512 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:27.512 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:27.512 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3818872 00:40:27.512 22:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3818872 00:40:27.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3818872 Killed "${NVMF_APP[@]}" "$@" 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=3824679 00:40:27.773 22:28:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 3824679 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3824679 ']' 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:27.773 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:27.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:27.774 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:27.774 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:27.774 [2024-10-12 22:28:46.088298] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:27.774 [2024-10-12 22:28:46.089289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:40:27.774 [2024-10-12 22:28:46.089328] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:27.774 [2024-10-12 22:28:46.171438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:27.774 [2024-10-12 22:28:46.199497] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:27.774 [2024-10-12 22:28:46.199527] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:27.774 [2024-10-12 22:28:46.199533] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:27.774 [2024-10-12 22:28:46.199542] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:27.774 [2024-10-12 22:28:46.199546] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:27.774 [2024-10-12 22:28:46.199563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.774 [2024-10-12 22:28:46.243558] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:27.774 [2024-10-12 22:28:46.243754] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:28.717 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:28.717 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:40:28.717 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:28.717 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:28.717 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:28.717 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:28.717 22:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:28.717 [2024-10-12 22:28:47.113977] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:28.717 [2024-10-12 22:28:47.114326] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:28.717 [2024-10-12 22:28:47.114422] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:28.717 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:28.717 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 817cb1aa-3fb9-46ca-8cad-962828475201 00:40:28.717 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local 
bdev_name=817cb1aa-3fb9-46ca-8cad-962828475201 00:40:28.717 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:28.717 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:40:28.717 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:28.717 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:28.717 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:28.978 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 817cb1aa-3fb9-46ca-8cad-962828475201 -t 2000 00:40:29.238 [ 00:40:29.238 { 00:40:29.238 "name": "817cb1aa-3fb9-46ca-8cad-962828475201", 00:40:29.238 "aliases": [ 00:40:29.238 "lvs/lvol" 00:40:29.238 ], 00:40:29.238 "product_name": "Logical Volume", 00:40:29.238 "block_size": 4096, 00:40:29.238 "num_blocks": 38912, 00:40:29.238 "uuid": "817cb1aa-3fb9-46ca-8cad-962828475201", 00:40:29.238 "assigned_rate_limits": { 00:40:29.238 "rw_ios_per_sec": 0, 00:40:29.238 "rw_mbytes_per_sec": 0, 00:40:29.238 "r_mbytes_per_sec": 0, 00:40:29.238 "w_mbytes_per_sec": 0 00:40:29.238 }, 00:40:29.238 "claimed": false, 00:40:29.238 "zoned": false, 00:40:29.238 "supported_io_types": { 00:40:29.238 "read": true, 00:40:29.238 "write": true, 00:40:29.238 "unmap": true, 00:40:29.238 "flush": false, 00:40:29.238 "reset": true, 00:40:29.238 "nvme_admin": false, 00:40:29.238 "nvme_io": false, 00:40:29.238 "nvme_io_md": false, 00:40:29.238 "write_zeroes": true, 
00:40:29.238 "zcopy": false, 00:40:29.238 "get_zone_info": false, 00:40:29.238 "zone_management": false, 00:40:29.238 "zone_append": false, 00:40:29.238 "compare": false, 00:40:29.238 "compare_and_write": false, 00:40:29.238 "abort": false, 00:40:29.238 "seek_hole": true, 00:40:29.238 "seek_data": true, 00:40:29.238 "copy": false, 00:40:29.238 "nvme_iov_md": false 00:40:29.238 }, 00:40:29.238 "driver_specific": { 00:40:29.238 "lvol": { 00:40:29.238 "lvol_store_uuid": "39b49229-85fb-44d8-8447-8c15ea1a0a84", 00:40:29.238 "base_bdev": "aio_bdev", 00:40:29.238 "thin_provision": false, 00:40:29.238 "num_allocated_clusters": 38, 00:40:29.238 "snapshot": false, 00:40:29.238 "clone": false, 00:40:29.238 "esnap_clone": false 00:40:29.238 } 00:40:29.238 } 00:40:29.238 } 00:40:29.238 ] 00:40:29.238 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:40:29.238 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:29.238 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:29.238 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:29.238 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:29.239 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:29.499 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:29.499 22:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:29.499 [2024-10-12 22:28:47.984027] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:29.759 request: 00:40:29.759 { 00:40:29.759 "uuid": "39b49229-85fb-44d8-8447-8c15ea1a0a84", 00:40:29.759 "method": "bdev_lvol_get_lvstores", 00:40:29.759 "req_id": 1 00:40:29.759 } 00:40:29.759 Got JSON-RPC error response 00:40:29.759 response: 00:40:29.759 { 00:40:29.759 "code": -19, 00:40:29.759 "message": "No such device" 00:40:29.759 } 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:29.759 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:30.020 aio_bdev 00:40:30.020 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 817cb1aa-3fb9-46ca-8cad-962828475201 00:40:30.020 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=817cb1aa-3fb9-46ca-8cad-962828475201 00:40:30.020 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:30.020 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:40:30.020 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:30.020 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:30.020 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:30.020 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 817cb1aa-3fb9-46ca-8cad-962828475201 -t 2000 00:40:30.280 [ 00:40:30.280 { 00:40:30.280 "name": "817cb1aa-3fb9-46ca-8cad-962828475201", 00:40:30.280 "aliases": [ 00:40:30.280 "lvs/lvol" 00:40:30.280 ], 00:40:30.280 "product_name": "Logical Volume", 00:40:30.280 "block_size": 4096, 00:40:30.280 "num_blocks": 38912, 00:40:30.280 "uuid": "817cb1aa-3fb9-46ca-8cad-962828475201", 00:40:30.280 "assigned_rate_limits": { 00:40:30.280 "rw_ios_per_sec": 0, 00:40:30.280 "rw_mbytes_per_sec": 0, 00:40:30.280 
"r_mbytes_per_sec": 0, 00:40:30.280 "w_mbytes_per_sec": 0 00:40:30.280 }, 00:40:30.280 "claimed": false, 00:40:30.280 "zoned": false, 00:40:30.280 "supported_io_types": { 00:40:30.280 "read": true, 00:40:30.280 "write": true, 00:40:30.280 "unmap": true, 00:40:30.280 "flush": false, 00:40:30.280 "reset": true, 00:40:30.280 "nvme_admin": false, 00:40:30.280 "nvme_io": false, 00:40:30.280 "nvme_io_md": false, 00:40:30.280 "write_zeroes": true, 00:40:30.280 "zcopy": false, 00:40:30.280 "get_zone_info": false, 00:40:30.280 "zone_management": false, 00:40:30.280 "zone_append": false, 00:40:30.280 "compare": false, 00:40:30.280 "compare_and_write": false, 00:40:30.280 "abort": false, 00:40:30.280 "seek_hole": true, 00:40:30.280 "seek_data": true, 00:40:30.280 "copy": false, 00:40:30.280 "nvme_iov_md": false 00:40:30.280 }, 00:40:30.280 "driver_specific": { 00:40:30.280 "lvol": { 00:40:30.280 "lvol_store_uuid": "39b49229-85fb-44d8-8447-8c15ea1a0a84", 00:40:30.280 "base_bdev": "aio_bdev", 00:40:30.280 "thin_provision": false, 00:40:30.280 "num_allocated_clusters": 38, 00:40:30.280 "snapshot": false, 00:40:30.280 "clone": false, 00:40:30.280 "esnap_clone": false 00:40:30.280 } 00:40:30.280 } 00:40:30.280 } 00:40:30.280 ] 00:40:30.280 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:40:30.280 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:30.280 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:30.542 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:30.542 22:28:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:30.542 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:30.542 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:30.542 22:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 817cb1aa-3fb9-46ca-8cad-962828475201 00:40:30.803 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39b49229-85fb-44d8-8447-8c15ea1a0a84 00:40:31.063 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:31.063 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:31.325 00:40:31.325 real 0m17.410s 00:40:31.325 user 0m35.120s 00:40:31.325 sys 0m3.120s 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:31.325 ************************************ 00:40:31.325 END TEST lvs_grow_dirty 00:40:31.325 ************************************ 
00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:31.325 nvmf_trace.0 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:31.325 22:28:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:31.325 rmmod nvme_tcp 00:40:31.325 rmmod nvme_fabrics 00:40:31.325 rmmod nvme_keyring 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 3824679 ']' 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 3824679 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3824679 ']' 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3824679 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3824679 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:31.325 
22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3824679' 00:40:31.325 killing process with pid 3824679 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3824679 00:40:31.325 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3824679 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:31.586 22:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.135 
22:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:34.135 00:40:34.135 real 0m43.894s 00:40:34.135 user 0m53.510s 00:40:34.135 sys 0m10.559s 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:34.135 ************************************ 00:40:34.135 END TEST nvmf_lvs_grow 00:40:34.135 ************************************ 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:34.135 ************************************ 00:40:34.135 START TEST nvmf_bdev_io_wait 00:40:34.135 ************************************ 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:34.135 * Looking for test storage... 
00:40:34.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:34.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.135 --rc genhtml_branch_coverage=1 00:40:34.135 --rc genhtml_function_coverage=1 00:40:34.135 --rc genhtml_legend=1 00:40:34.135 --rc geninfo_all_blocks=1 00:40:34.135 --rc geninfo_unexecuted_blocks=1 00:40:34.135 00:40:34.135 ' 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:34.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.135 --rc genhtml_branch_coverage=1 00:40:34.135 --rc genhtml_function_coverage=1 00:40:34.135 --rc genhtml_legend=1 00:40:34.135 --rc geninfo_all_blocks=1 00:40:34.135 --rc geninfo_unexecuted_blocks=1 00:40:34.135 00:40:34.135 ' 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:34.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.135 --rc genhtml_branch_coverage=1 00:40:34.135 --rc genhtml_function_coverage=1 00:40:34.135 --rc genhtml_legend=1 00:40:34.135 --rc geninfo_all_blocks=1 00:40:34.135 --rc geninfo_unexecuted_blocks=1 00:40:34.135 00:40:34.135 ' 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:34.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.135 --rc genhtml_branch_coverage=1 00:40:34.135 --rc genhtml_function_coverage=1 
00:40:34.135 --rc genhtml_legend=1 00:40:34.135 --rc geninfo_all_blocks=1 00:40:34.135 --rc geninfo_unexecuted_blocks=1 00:40:34.135 00:40:34.135 ' 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:34.135 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:34.136 22:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.136 22:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:34.136 22:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:34.136 22:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:40:34.136 22:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:40:42.281 22:28:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:42.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:42.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice 
== unknown ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net 
devices under 0000:4b:00.0: cvl_0_0' 00:40:42.281 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:42.281 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:40:42.281 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:42.282 22:28:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:42.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:42.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:40:42.282 00:40:42.282 --- 10.0.0.2 ping statistics --- 00:40:42.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:42.282 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:42.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:42.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:40:42.282 00:40:42.282 --- 10.0.0.1 ping statistics --- 00:40:42.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:42.282 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:42.282 22:28:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=3829514 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 3829514 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3829514 ']' 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:42.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:42.282 22:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.282 [2024-10-12 22:28:59.699976] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:42.282 [2024-10-12 22:28:59.700927] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:42.282 [2024-10-12 22:28:59.700963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:42.282 [2024-10-12 22:28:59.782639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:42.282 [2024-10-12 22:28:59.816033] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:42.282 [2024-10-12 22:28:59.816069] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:42.282 [2024-10-12 22:28:59.816078] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:42.282 [2024-10-12 22:28:59.816084] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:42.282 [2024-10-12 22:28:59.816090] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:42.282 [2024-10-12 22:28:59.816228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:42.282 [2024-10-12 22:28:59.816347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:40:42.282 [2024-10-12 22:28:59.816459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:42.282 [2024-10-12 22:28:59.816461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:40:42.282 [2024-10-12 22:28:59.816907] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.282 22:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.282 [2024-10-12 22:29:00.628237] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:42.282 [2024-10-12 22:29:00.629024] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:42.282 [2024-10-12 22:29:00.629333] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:42.282 [2024-10-12 22:29:00.629500] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.282 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.283 [2024-10-12 22:29:00.641442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.283 Malloc0 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.283 22:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:42.283 [2024-10-12 22:29:00.725732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3829770 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3829773 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:42.283 22:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:42.283 { 00:40:42.283 "params": { 00:40:42.283 "name": "Nvme$subsystem", 00:40:42.283 "trtype": "$TEST_TRANSPORT", 00:40:42.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:42.283 "adrfam": "ipv4", 00:40:42.283 "trsvcid": "$NVMF_PORT", 00:40:42.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:42.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:42.283 "hdgst": ${hdgst:-false}, 00:40:42.283 "ddgst": ${ddgst:-false} 00:40:42.283 }, 00:40:42.283 "method": "bdev_nvme_attach_controller" 00:40:42.283 } 00:40:42.283 EOF 00:40:42.283 )") 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3829775 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:42.283 22:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3829778 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:42.283 { 00:40:42.283 "params": { 00:40:42.283 "name": "Nvme$subsystem", 00:40:42.283 "trtype": "$TEST_TRANSPORT", 00:40:42.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:42.283 "adrfam": "ipv4", 00:40:42.283 "trsvcid": "$NVMF_PORT", 00:40:42.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:42.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:42.283 "hdgst": ${hdgst:-false}, 00:40:42.283 "ddgst": ${ddgst:-false} 00:40:42.283 }, 00:40:42.283 "method": "bdev_nvme_attach_controller" 00:40:42.283 } 00:40:42.283 EOF 00:40:42.283 )") 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:42.283 { 00:40:42.283 "params": { 00:40:42.283 "name": 
"Nvme$subsystem", 00:40:42.283 "trtype": "$TEST_TRANSPORT", 00:40:42.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:42.283 "adrfam": "ipv4", 00:40:42.283 "trsvcid": "$NVMF_PORT", 00:40:42.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:42.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:42.283 "hdgst": ${hdgst:-false}, 00:40:42.283 "ddgst": ${ddgst:-false} 00:40:42.283 }, 00:40:42.283 "method": "bdev_nvme_attach_controller" 00:40:42.283 } 00:40:42.283 EOF 00:40:42.283 )") 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:40:42.283 { 00:40:42.283 "params": { 00:40:42.283 "name": "Nvme$subsystem", 00:40:42.283 "trtype": "$TEST_TRANSPORT", 00:40:42.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:42.283 "adrfam": "ipv4", 00:40:42.283 "trsvcid": "$NVMF_PORT", 00:40:42.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:42.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:42.283 "hdgst": ${hdgst:-false}, 00:40:42.283 "ddgst": ${ddgst:-false} 00:40:42.283 }, 00:40:42.283 "method": 
"bdev_nvme_attach_controller" 00:40:42.283 } 00:40:42.283 EOF 00:40:42.283 )") 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3829770 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:42.283 "params": { 00:40:42.283 "name": "Nvme1", 00:40:42.283 "trtype": "tcp", 00:40:42.283 "traddr": "10.0.0.2", 00:40:42.283 "adrfam": "ipv4", 00:40:42.283 "trsvcid": "4420", 00:40:42.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:42.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:42.283 "hdgst": false, 00:40:42.283 "ddgst": false 00:40:42.283 }, 00:40:42.283 "method": "bdev_nvme_attach_controller" 00:40:42.283 }' 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:42.283 "params": { 00:40:42.283 "name": "Nvme1", 00:40:42.283 "trtype": "tcp", 00:40:42.283 "traddr": "10.0.0.2", 00:40:42.283 "adrfam": "ipv4", 00:40:42.283 "trsvcid": "4420", 00:40:42.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:42.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:42.283 "hdgst": false, 00:40:42.283 "ddgst": false 00:40:42.283 }, 00:40:42.283 "method": "bdev_nvme_attach_controller" 00:40:42.283 }' 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:42.283 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:42.283 "params": { 00:40:42.283 "name": "Nvme1", 00:40:42.283 "trtype": "tcp", 00:40:42.283 "traddr": "10.0.0.2", 00:40:42.283 "adrfam": "ipv4", 00:40:42.283 "trsvcid": "4420", 00:40:42.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:42.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:42.283 "hdgst": false, 00:40:42.284 "ddgst": false 00:40:42.284 }, 00:40:42.284 "method": "bdev_nvme_attach_controller" 00:40:42.284 }' 00:40:42.284 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:40:42.284 22:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:40:42.284 "params": { 00:40:42.284 "name": "Nvme1", 00:40:42.284 "trtype": "tcp", 00:40:42.284 "traddr": "10.0.0.2", 00:40:42.284 "adrfam": "ipv4", 00:40:42.284 "trsvcid": "4420", 00:40:42.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:42.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:42.284 "hdgst": false, 00:40:42.284 "ddgst": false 00:40:42.284 }, 00:40:42.284 "method": "bdev_nvme_attach_controller" 
00:40:42.284 }' 00:40:42.545 [2024-10-12 22:29:00.787544] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:42.545 [2024-10-12 22:29:00.787616] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:42.545 [2024-10-12 22:29:00.792891] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:42.545 [2024-10-12 22:29:00.792963] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:42.545 [2024-10-12 22:29:00.796096] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:42.545 [2024-10-12 22:29:00.796100] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:40:42.545 [2024-10-12 22:29:00.796168] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:42.545 [2024-10-12 22:29:00.796172] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:42.545 [2024-10-12 22:29:00.988791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.545 [2024-10-12 22:29:01.018003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:40:42.806 [2024-10-12 22:29:01.077940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.806 [2024-10-12 22:29:01.099699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:40:42.806 [2024-10-12 22:29:01.131679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.806 [2024-10-12 22:29:01.150549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:40:42.806 [2024-10-12 22:29:01.178666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.806 [2024-10-12 22:29:01.195918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:40:43.066 Running I/O for 1 seconds... 00:40:43.066 Running I/O for 1 seconds... 00:40:43.066 Running I/O for 1 seconds... 00:40:43.066 Running I/O for 1 seconds... 
00:40:44.008 14244.00 IOPS, 55.64 MiB/s 00:40:44.008 Latency(us) 00:40:44.008 [2024-10-12T20:29:02.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:44.008 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:44.008 Nvme1n1 : 1.01 14301.11 55.86 0.00 0.00 8923.48 4423.68 10977.28 00:40:44.008 [2024-10-12T20:29:02.497Z] =================================================================================================================== 00:40:44.008 [2024-10-12T20:29:02.497Z] Total : 14301.11 55.86 0.00 0.00 8923.48 4423.68 10977.28 00:40:44.008 11253.00 IOPS, 43.96 MiB/s 00:40:44.008 Latency(us) 00:40:44.008 [2024-10-12T20:29:02.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:44.008 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:44.008 Nvme1n1 : 1.01 11303.67 44.15 0.00 0.00 11282.11 5106.35 15182.51 00:40:44.008 [2024-10-12T20:29:02.497Z] =================================================================================================================== 00:40:44.008 [2024-10-12T20:29:02.497Z] Total : 11303.67 44.15 0.00 0.00 11282.11 5106.35 15182.51 00:40:44.008 11522.00 IOPS, 45.01 MiB/s 00:40:44.008 Latency(us) 00:40:44.008 [2024-10-12T20:29:02.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:44.008 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:44.008 Nvme1n1 : 1.01 11600.02 45.31 0.00 0.00 10997.07 2375.68 17476.27 00:40:44.008 [2024-10-12T20:29:02.497Z] =================================================================================================================== 00:40:44.008 [2024-10-12T20:29:02.497Z] Total : 11600.02 45.31 0.00 0.00 10997.07 2375.68 17476.27 00:40:44.270 183880.00 IOPS, 718.28 MiB/s 00:40:44.270 Latency(us) 00:40:44.270 [2024-10-12T20:29:02.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:44.270 Job: Nvme1n1 (Core 
Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:44.270 Nvme1n1 : 1.00 183511.80 716.84 0.00 0.00 693.57 308.91 1979.73 00:40:44.270 [2024-10-12T20:29:02.759Z] =================================================================================================================== 00:40:44.270 [2024-10-12T20:29:02.759Z] Total : 183511.80 716.84 0.00 0.00 693.57 308.91 1979.73 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3829773 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3829775 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3829778 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:44.270 22:29:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:44.270 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:44.270 rmmod nvme_tcp 00:40:44.270 rmmod nvme_fabrics 00:40:44.532 rmmod nvme_keyring 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 3829514 ']' 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 3829514 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3829514 ']' 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3829514 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3829514 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3829514' 00:40:44.532 killing process with pid 3829514 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3829514 00:40:44.532 22:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3829514 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.795 22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:44.795 
22:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.709 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:46.709 00:40:46.709 real 0m13.015s 00:40:46.709 user 0m16.027s 00:40:46.709 sys 0m7.626s 00:40:46.709 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:46.709 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:46.709 ************************************ 00:40:46.709 END TEST nvmf_bdev_io_wait 00:40:46.709 ************************************ 00:40:46.709 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:46.709 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:46.709 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:46.709 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:46.709 ************************************ 00:40:46.709 START TEST nvmf_queue_depth 00:40:46.709 ************************************ 00:40:46.709 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:40:46.972 * Looking for test storage... 
00:40:46.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.972 --rc genhtml_branch_coverage=1 00:40:46.972 --rc genhtml_function_coverage=1 00:40:46.972 --rc genhtml_legend=1 00:40:46.972 --rc geninfo_all_blocks=1 00:40:46.972 --rc geninfo_unexecuted_blocks=1 00:40:46.972 00:40:46.972 ' 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:40:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.972 --rc genhtml_branch_coverage=1 00:40:46.972 --rc genhtml_function_coverage=1 00:40:46.972 --rc genhtml_legend=1 00:40:46.972 --rc geninfo_all_blocks=1 00:40:46.972 --rc geninfo_unexecuted_blocks=1 00:40:46.972 00:40:46.972 ' 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.972 --rc genhtml_branch_coverage=1 00:40:46.972 --rc genhtml_function_coverage=1 00:40:46.972 --rc genhtml_legend=1 00:40:46.972 --rc geninfo_all_blocks=1 00:40:46.972 --rc geninfo_unexecuted_blocks=1 00:40:46.972 00:40:46.972 ' 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.972 --rc genhtml_branch_coverage=1 00:40:46.972 --rc genhtml_function_coverage=1 00:40:46.972 --rc genhtml_legend=1 00:40:46.972 --rc 
geninfo_all_blocks=1 00:40:46.972 --rc geninfo_unexecuted_blocks=1 00:40:46.972 00:40:46.972 ' 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.972 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.973 22:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:46.973 22:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:40:46.973 22:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:40:46.973 22:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:40:55.119 
22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
pci_devs+=("${e810[@]}") 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:55.119 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:55.119 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:55.119 22:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:55.119 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.120 22:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:55.120 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:55.120 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == 
yes ]] 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:55.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:55.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:40:55.120 00:40:55.120 --- 10.0.0.2 ping statistics --- 00:40:55.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:55.120 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:55.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:55.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:40:55.120 00:40:55.120 --- 10.0.0.1 ping statistics --- 00:40:55.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:55.120 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:40:55.120 22:29:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=3834292 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 3834292 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3834292 ']' 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:55.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:55.120 22:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.120 [2024-10-12 22:29:12.682839] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:55.120 [2024-10-12 22:29:12.683791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:55.120 [2024-10-12 22:29:12.683826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:55.120 [2024-10-12 22:29:12.768905] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:55.120 [2024-10-12 22:29:12.799503] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:55.120 [2024-10-12 22:29:12.799536] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:55.120 [2024-10-12 22:29:12.799544] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:55.120 [2024-10-12 22:29:12.799551] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:55.120 [2024-10-12 22:29:12.799557] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:55.120 [2024-10-12 22:29:12.799576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:55.120 [2024-10-12 22:29:12.847348] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:55.120 [2024-10-12 22:29:12.847598] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.120 [2024-10-12 22:29:13.532346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.120 Malloc0 00:40:55.120 22:29:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:55.120 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.121 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.121 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.121 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:55.121 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.121 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.121 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.385 [2024-10-12 22:29:13.612371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:55.385 
22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3834484 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3834484 /var/tmp/bdevperf.sock 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3834484 ']' 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:55.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:55.385 22:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:55.385 [2024-10-12 22:29:13.669512] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:40:55.385 [2024-10-12 22:29:13.669576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3834484 ] 00:40:55.385 [2024-10-12 22:29:13.750957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:55.385 [2024-10-12 22:29:13.797593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.328 22:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:56.328 22:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:40:56.328 22:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:56.328 22:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:56.328 22:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:56.328 NVMe0n1 00:40:56.328 22:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:56.328 22:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:56.328 Running I/O for 10 seconds... 
00:40:58.658 9216.00 IOPS, 36.00 MiB/s [2024-10-12T20:29:18.087Z] 9219.50 IOPS, 36.01 MiB/s [2024-10-12T20:29:19.029Z] 10242.00 IOPS, 40.01 MiB/s [2024-10-12T20:29:19.970Z] 11094.00 IOPS, 43.34 MiB/s [2024-10-12T20:29:20.910Z] 11677.60 IOPS, 45.62 MiB/s [2024-10-12T20:29:21.851Z] 12040.17 IOPS, 47.03 MiB/s [2024-10-12T20:29:22.791Z] 12308.00 IOPS, 48.08 MiB/s [2024-10-12T20:29:24.174Z] 12557.12 IOPS, 49.05 MiB/s [2024-10-12T20:29:25.115Z] 12748.11 IOPS, 49.80 MiB/s [2024-10-12T20:29:25.115Z] 12898.90 IOPS, 50.39 MiB/s 00:41:06.626 Latency(us) 00:41:06.626 [2024-10-12T20:29:25.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:06.626 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:41:06.626 Verification LBA range: start 0x0 length 0x4000 00:41:06.626 NVMe0n1 : 10.06 12923.11 50.48 0.00 0.00 78953.72 23920.64 65536.00 00:41:06.626 [2024-10-12T20:29:25.115Z] =================================================================================================================== 00:41:06.626 [2024-10-12T20:29:25.116Z] Total : 12923.11 50.48 0.00 0.00 78953.72 23920.64 65536.00 00:41:06.627 { 00:41:06.627 "results": [ 00:41:06.627 { 00:41:06.627 "job": "NVMe0n1", 00:41:06.627 "core_mask": "0x1", 00:41:06.627 "workload": "verify", 00:41:06.627 "status": "finished", 00:41:06.627 "verify_range": { 00:41:06.627 "start": 0, 00:41:06.627 "length": 16384 00:41:06.627 }, 00:41:06.627 "queue_depth": 1024, 00:41:06.627 "io_size": 4096, 00:41:06.627 "runtime": 10.060506, 00:41:06.627 "iops": 12923.107446086708, 00:41:06.627 "mibps": 50.4808884612762, 00:41:06.627 "io_failed": 0, 00:41:06.627 "io_timeout": 0, 00:41:06.627 "avg_latency_us": 78953.72220644602, 00:41:06.627 "min_latency_us": 23920.64, 00:41:06.627 "max_latency_us": 65536.0 00:41:06.627 } 00:41:06.627 ], 00:41:06.627 "core_count": 1 00:41:06.627 } 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3834484 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3834484 ']' 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3834484 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3834484 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3834484' 00:41:06.627 killing process with pid 3834484 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3834484 00:41:06.627 Received shutdown signal, test time was about 10.000000 seconds 00:41:06.627 00:41:06.627 Latency(us) 00:41:06.627 [2024-10-12T20:29:25.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:06.627 [2024-10-12T20:29:25.116Z] =================================================================================================================== 00:41:06.627 [2024-10-12T20:29:25.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:06.627 22:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3834484 00:41:06.627 22:29:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:06.627 rmmod nvme_tcp 00:41:06.627 rmmod nvme_fabrics 00:41:06.627 rmmod nvme_keyring 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 3834292 ']' 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 3834292 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3834292 ']' 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3834292 00:41:06.627 22:29:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:06.627 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3834292 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3834292' 00:41:06.888 killing process with pid 3834292 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3834292 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3834292 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 
00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:06.888 22:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:09.431 00:41:09.431 real 0m22.177s 00:41:09.431 user 0m24.590s 00:41:09.431 sys 0m7.177s 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:09.431 ************************************ 00:41:09.431 END TEST nvmf_queue_depth 00:41:09.431 ************************************ 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:09.431 ************************************ 00:41:09.431 START 
TEST nvmf_target_multipath 00:41:09.431 ************************************ 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:09.431 * Looking for test storage... 00:41:09.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:09.431 22:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:09.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.431 --rc genhtml_branch_coverage=1 00:41:09.431 --rc genhtml_function_coverage=1 00:41:09.431 --rc genhtml_legend=1 00:41:09.431 --rc geninfo_all_blocks=1 00:41:09.431 --rc geninfo_unexecuted_blocks=1 00:41:09.431 00:41:09.431 ' 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:09.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.431 --rc genhtml_branch_coverage=1 00:41:09.431 --rc genhtml_function_coverage=1 00:41:09.431 --rc genhtml_legend=1 00:41:09.431 --rc geninfo_all_blocks=1 00:41:09.431 --rc geninfo_unexecuted_blocks=1 00:41:09.431 00:41:09.431 ' 00:41:09.431 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:09.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.431 --rc genhtml_branch_coverage=1 00:41:09.431 --rc genhtml_function_coverage=1 00:41:09.431 --rc genhtml_legend=1 00:41:09.431 --rc geninfo_all_blocks=1 00:41:09.431 --rc geninfo_unexecuted_blocks=1 00:41:09.432 00:41:09.432 ' 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:09.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:09.432 --rc genhtml_branch_coverage=1 00:41:09.432 --rc genhtml_function_coverage=1 00:41:09.432 --rc genhtml_legend=1 00:41:09.432 --rc geninfo_all_blocks=1 00:41:09.432 --rc geninfo_unexecuted_blocks=1 00:41:09.432 00:41:09.432 ' 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:09.432 22:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:09.432 22:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:09.432 22:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:41:17.570 22:29:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:17.570 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:17.571 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 
tcp == rdma ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:17.571 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:17.571 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:17.571 Found net devices under 0000:4b:00.1: 
cvl_0_1 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:17.571 22:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:17.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:17.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:41:17.571 00:41:17.571 --- 10.0.0.2 ping statistics --- 00:41:17.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:17.571 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:17.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:17.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:41:17.571 00:41:17.571 --- 10.0.0.1 ping statistics --- 00:41:17.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:17.571 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:17.571 22:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:17.571 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:41:17.572 only one NIC for nvmf test 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:17.572 rmmod nvme_tcp 00:41:17.572 rmmod nvme_fabrics 00:41:17.572 rmmod nvme_keyring 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:17.572 22:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:17.572 22:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@297 -- # iptr 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:18.955 00:41:18.955 real 0m9.821s 00:41:18.955 user 0m2.165s 00:41:18.955 sys 0m5.595s 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:18.955 ************************************ 00:41:18.955 END TEST nvmf_target_multipath 00:41:18.955 ************************************ 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh 
--transport=tcp --interrupt-mode 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:18.955 ************************************ 00:41:18.955 START TEST nvmf_zcopy 00:41:18.955 ************************************ 00:41:18.955 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:18.955 * Looking for test storage... 00:41:19.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 
00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:19.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.216 --rc genhtml_branch_coverage=1 00:41:19.216 --rc 
genhtml_function_coverage=1 00:41:19.216 --rc genhtml_legend=1 00:41:19.216 --rc geninfo_all_blocks=1 00:41:19.216 --rc geninfo_unexecuted_blocks=1 00:41:19.216 00:41:19.216 ' 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:19.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.216 --rc genhtml_branch_coverage=1 00:41:19.216 --rc genhtml_function_coverage=1 00:41:19.216 --rc genhtml_legend=1 00:41:19.216 --rc geninfo_all_blocks=1 00:41:19.216 --rc geninfo_unexecuted_blocks=1 00:41:19.216 00:41:19.216 ' 00:41:19.216 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:19.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.216 --rc genhtml_branch_coverage=1 00:41:19.216 --rc genhtml_function_coverage=1 00:41:19.217 --rc genhtml_legend=1 00:41:19.217 --rc geninfo_all_blocks=1 00:41:19.217 --rc geninfo_unexecuted_blocks=1 00:41:19.217 00:41:19.217 ' 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:19.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.217 --rc genhtml_branch_coverage=1 00:41:19.217 --rc genhtml_function_coverage=1 00:41:19.217 --rc genhtml_legend=1 00:41:19.217 --rc geninfo_all_blocks=1 00:41:19.217 --rc geninfo_unexecuted_blocks=1 00:41:19.217 00:41:19.217 ' 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.217 22:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:19.217 22:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:19.217 22:29:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:41:19.217 22:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.356 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:27.356 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:41:27.356 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:27.356 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:27.356 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:27.356 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:27.356 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga 
x722 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:41:27.357 22:29:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:27.357 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:27.357 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:41:27.357 22:29:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:27.357 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:27.357 22:29:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:27.357 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:41:27.357 22:29:44 
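The `Found 0000:4b:00.0 (0x8086 - 0x159b)` lines above are printed by the `gather_supported_nvmf_pci_devs` loop as it matches PCI vendor/device IDs against its e810/x722/mlx tables (0x8086 is Intel's vendor ID; 0x159b is an E810 device ID, hence the `ice` driver checks that follow). A minimal sketch of parsing such a line for log post-processing — `parse_found_line` is a hypothetical helper, not part of the SPDK scripts:

```python
import re

# Parses a "Found <bdf> (<vendor> - <device>)" line as echoed by the
# autotest's PCI discovery loop. Returns (bdf, vendor_id, device_id).
def parse_found_line(line):
    m = re.match(r"Found (\S+) \((0x[0-9a-fA-F]+) - (0x[0-9a-fA-F]+)\)", line)
    if m is None:
        raise ValueError(f"not a device-discovery line: {line!r}")
    return m.group(1), m.group(2), m.group(3)
```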
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:27.357 22:29:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:27.357 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:27.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:27.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:41:27.358 00:41:27.358 --- 10.0.0.2 ping statistics --- 00:41:27.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.358 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:27.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:27.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:41:27.358 00:41:27.358 --- 10.0.0.1 ping statistics --- 00:41:27.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.358 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.358 22:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # 
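The two `ping -c 1` checks above confirm bidirectional reachability between the target side (10.0.0.2 on `cvl_0_0` inside the `cvl_0_0_ns_spdk` namespace) and the initiator interface (10.0.0.1 on `cvl_0_1`) before the NVMe-oF target is started. As an illustrative sketch only (not part of the test suite), ping's `rtt min/avg/max/mdev` summary line can be extracted like this — `parse_ping_rtt` is a hypothetical helper:

```python
import re

# Extracts (min, avg, max, mdev) in milliseconds from ping's summary line,
# e.g. "rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms".
def parse_ping_rtt(line):
    m = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", line
    )
    if m is None:
        raise ValueError(f"no rtt summary in: {line!r}")
    return tuple(float(x) for x in m.groups())
```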
nvmfpid=3844818 00:41:27.358 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 3844818 00:41:27.358 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:27.358 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3844818 ']' 00:41:27.358 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:27.358 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:27.358 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:27.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:27.358 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:27.358 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.358 [2024-10-12 22:29:45.054806] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:27.358 [2024-10-12 22:29:45.055909] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:41:27.358 [2024-10-12 22:29:45.055960] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:27.358 [2024-10-12 22:29:45.145788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.358 [2024-10-12 22:29:45.191486] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:27.358 [2024-10-12 22:29:45.191541] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:27.358 [2024-10-12 22:29:45.191550] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:27.358 [2024-10-12 22:29:45.191557] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:27.358 [2024-10-12 22:29:45.191564] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:27.358 [2024-10-12 22:29:45.191596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:41:27.358 [2024-10-12 22:29:45.256176] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:27.358 [2024-10-12 22:29:45.256444] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.619 [2024-10-12 22:29:45.936471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.619 
22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.619 [2024-10-12 22:29:45.964746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.619 22:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.619 malloc0 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:27.619 { 00:41:27.619 "params": { 00:41:27.619 "name": "Nvme$subsystem", 00:41:27.619 "trtype": "$TEST_TRANSPORT", 00:41:27.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:27.619 "adrfam": "ipv4", 00:41:27.619 "trsvcid": "$NVMF_PORT", 00:41:27.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:27.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:27.619 "hdgst": ${hdgst:-false}, 00:41:27.619 "ddgst": ${ddgst:-false} 00:41:27.619 }, 00:41:27.619 "method": "bdev_nvme_attach_controller" 00:41:27.619 } 00:41:27.619 EOF 00:41:27.619 )") 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:41:27.619 22:29:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:41:27.619 22:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:27.619 "params": { 00:41:27.619 "name": "Nvme1", 00:41:27.619 "trtype": "tcp", 00:41:27.619 "traddr": "10.0.0.2", 00:41:27.619 "adrfam": "ipv4", 00:41:27.619 "trsvcid": "4420", 00:41:27.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:27.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:27.619 "hdgst": false, 00:41:27.619 "ddgst": false 00:41:27.619 }, 00:41:27.619 "method": "bdev_nvme_attach_controller" 00:41:27.619 }' 00:41:27.619 [2024-10-12 22:29:46.088817] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:41:27.619 [2024-10-12 22:29:46.088886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845139 ] 00:41:27.880 [2024-10-12 22:29:46.172113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.880 [2024-10-12 22:29:46.218990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:28.140 Running I/O for 10 seconds... 
00:41:30.090 6329.00 IOPS, 49.45 MiB/s [2024-10-12T20:29:49.963Z] 6374.50 IOPS, 49.80 MiB/s [2024-10-12T20:29:50.906Z] 6395.67 IOPS, 49.97 MiB/s [2024-10-12T20:29:51.848Z] 6391.00 IOPS, 49.93 MiB/s [2024-10-12T20:29:52.789Z] 6730.40 IOPS, 52.58 MiB/s [2024-10-12T20:29:53.730Z] 7201.83 IOPS, 56.26 MiB/s [2024-10-12T20:29:54.671Z] 7532.29 IOPS, 58.85 MiB/s [2024-10-12T20:29:55.613Z] 7779.62 IOPS, 60.78 MiB/s [2024-10-12T20:29:56.998Z] 7978.11 IOPS, 62.33 MiB/s [2024-10-12T20:29:56.998Z] 8133.90 IOPS, 63.55 MiB/s 00:41:38.509 Latency(us) 00:41:38.509 [2024-10-12T20:29:56.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:38.509 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:38.509 Verification LBA range: start 0x0 length 0x1000 00:41:38.509 Nvme1n1 : 10.01 8138.65 63.58 0.00 0.00 15680.48 1099.09 28617.39 00:41:38.509 [2024-10-12T20:29:56.998Z] =================================================================================================================== 00:41:38.509 [2024-10-12T20:29:56.998Z] Total : 8138.65 63.58 0.00 0.00 15680.48 1099.09 28617.39 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3847003 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:41:38.509 22:29:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:41:38.509 { 00:41:38.509 "params": { 00:41:38.509 "name": "Nvme$subsystem", 00:41:38.509 "trtype": "$TEST_TRANSPORT", 00:41:38.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:38.509 "adrfam": "ipv4", 00:41:38.509 "trsvcid": "$NVMF_PORT", 00:41:38.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:38.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:38.509 "hdgst": ${hdgst:-false}, 00:41:38.509 "ddgst": ${ddgst:-false} 00:41:38.509 }, 00:41:38.509 "method": "bdev_nvme_attach_controller" 00:41:38.509 } 00:41:38.509 EOF 00:41:38.509 )") 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:41:38.509 [2024-10-12 22:29:56.683985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.509 [2024-10-12 22:29:56.684012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:41:38.509 22:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:41:38.509 "params": { 00:41:38.509 "name": "Nvme1", 00:41:38.509 "trtype": "tcp", 00:41:38.509 "traddr": "10.0.0.2", 00:41:38.509 "adrfam": "ipv4", 00:41:38.509 "trsvcid": "4420", 00:41:38.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:38.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:38.509 "hdgst": false, 00:41:38.509 "ddgst": false 00:41:38.509 }, 00:41:38.509 "method": "bdev_nvme_attach_controller" 00:41:38.509 }' 00:41:38.509 [2024-10-12 22:29:56.695957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.509 [2024-10-12 22:29:56.695967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.509 [2024-10-12 22:29:56.703953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.509 [2024-10-12 22:29:56.703961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.509 [2024-10-12 22:29:56.715953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.509 [2024-10-12 22:29:56.715961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.509 [2024-10-12 22:29:56.724463] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:41:38.509 [2024-10-12 22:29:56.724512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3847003 ] 00:41:38.509 [2024-10-12 22:29:56.727951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.509 [2024-10-12 22:29:56.727959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.509 [2024-10-12 22:29:56.739951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.509 [2024-10-12 22:29:56.739960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.509 [2024-10-12 22:29:56.751951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.509 [2024-10-12 22:29:56.751960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.509 [2024-10-12 22:29:56.763952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.763961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.775952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.775965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.787951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.787960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.798838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:38.510 [2024-10-12 22:29:56.799952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:38.510 [2024-10-12 22:29:56.799961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.811954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.811964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.823955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.823970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.827069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:41:38.510 [2024-10-12 22:29:56.835951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.835960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.847958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.847973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.859955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.859964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.871954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.871964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.883951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.883961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.895960] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.895977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.907953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.907964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.919954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.919965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.931952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.931962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.943952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.943960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.955951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.955959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.967952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.967962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.979952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.979962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.510 [2024-10-12 22:29:56.991951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:38.510 [2024-10-12 22:29:56.991965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.770 [2024-10-12 22:29:57.003952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.770 [2024-10-12 22:29:57.003962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.770 [2024-10-12 22:29:57.015953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.770 [2024-10-12 22:29:57.015963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.770 [2024-10-12 22:29:57.027952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.770 [2024-10-12 22:29:57.027961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.770 [2024-10-12 22:29:57.039951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.770 [2024-10-12 22:29:57.039960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.770 [2024-10-12 22:29:57.051951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.770 [2024-10-12 22:29:57.051960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.770 [2024-10-12 22:29:57.065447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.770 [2024-10-12 22:29:57.065462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.770 [2024-10-12 22:29:57.075954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.770 [2024-10-12 22:29:57.075965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.770 Running I/O for 5 seconds... 
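The long run of `Requested NSID 1 already in use` / `Unable to add namespace` errors interleaved with the 5-second run appears to come from the test re-issuing the add-namespace RPC while bdevperf I/O is in flight; the failures are expected and the call is simply retried. A generic retry helper of the kind such a loop could be built on is sketched below; `flaky_add_ns` is a stand-in for the real `rpc.py` invocation, not SPDK code:

```shell
#!/usr/bin/env bash
# Generic retry loop: run a command until it succeeds or attempts run out.
retry() {
  local max_tries=$1; shift
  local i
  for ((i = 1; i <= max_tries; i++)); do
    "$@" && return 0   # success: stop retrying
  done
  return 1             # exhausted all attempts
}

# Stand-in for the failing RPC: fails twice, then succeeds, mimicking an
# add-namespace call that only lands once the subsystem is resumed.
attempts=0
flaky_add_ns() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

retry 5 flaky_add_ns && status=ok || status=failed
```

In the real test the retries are interleaved with I/O on purpose: the point of the zcopy exercise is that namespace add/remove under load fails cleanly and recovers, rather than corrupting in-flight requests.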
00:41:38.770 [2024-10-12 22:29:57.092226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.770 [2024-10-12 22:29:57.092243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.104223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.104238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.119344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.119360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.132175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.132190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.143676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.143691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.156505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.156520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.170961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.170976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.183825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.183840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.196182] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.196196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.211254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.211270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.223734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.223749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.236184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.236200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:38.771 [2024-10-12 22:29:57.251032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:38.771 [2024-10-12 22:29:57.251048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.031 [2024-10-12 22:29:57.263978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.031 [2024-10-12 22:29:57.263994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.031 [2024-10-12 22:29:57.276518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.031 [2024-10-12 22:29:57.276533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.031 [2024-10-12 22:29:57.291265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.031 [2024-10-12 22:29:57.291280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.031 [2024-10-12 22:29:57.304048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.304064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.316031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.316047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.328756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.328771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.342802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.342818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.355364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.355379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.367879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.367895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.380691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.380705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.395368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.395384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.408385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 
[2024-10-12 22:29:57.408399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.423081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.423096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.435962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.435977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.448535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.448550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.462963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.462979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.476265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.476279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.491508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.491523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.504152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.504167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.032 [2024-10-12 22:29:57.516734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.032 [2024-10-12 22:29:57.516749] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.531463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.531479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.544123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.544138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.556992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.557007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.571404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.571419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.584379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.584394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.599215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.599230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.612268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.612282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.626630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.626645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:39.293 [2024-10-12 22:29:57.639899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.639915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.652718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.652733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.667150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.667166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.680079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.680095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.692646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.692661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.707263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.707278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.720222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.720236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.735201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.735217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.748626] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.748641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.763296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.763311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.293 [2024-10-12 22:29:57.776011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.293 [2024-10-12 22:29:57.776026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.788577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.788591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.803476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.803491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.816494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.816509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.831139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.831154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.844074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.844089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.856389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.856404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.871133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.871149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.884472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.884486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.898560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.898575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.911711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.911726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.924384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.924399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.939379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.939394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.952133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.952147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.967440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 
[2024-10-12 22:29:57.967455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.980416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.980431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:57.995101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:57.995125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:58.008067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.555 [2024-10-12 22:29:58.008081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.555 [2024-10-12 22:29:58.023373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.556 [2024-10-12 22:29:58.023388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.556 [2024-10-12 22:29:58.036442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.556 [2024-10-12 22:29:58.036456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [2024-10-12 22:29:58.051039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.051055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [2024-10-12 22:29:58.064129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.064145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [2024-10-12 22:29:58.076768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.076783] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 18882.00 IOPS, 147.52 MiB/s [2024-10-12T20:29:58.308Z] [2024-10-12 22:29:58.091350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.091365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [2024-10-12 22:29:58.104427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.104442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [2024-10-12 22:29:58.119213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.119228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [2024-10-12 22:29:58.132168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.132183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [2024-10-12 22:29:58.147425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.147441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [2024-10-12 22:29:58.160496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.160511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [2024-10-12 22:29:58.175163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.175179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [2024-10-12 22:29:58.188408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:39.819 [2024-10-12 22:29:58.188423] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:39.819 [... the error pair "subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats at roughly 13 ms intervals from 2024-10-12 22:29:58.203343 through 22:29:59.076142 ...] 18903.50 IOPS, 147.68 MiB/s [2024-10-12T20:29:59.113Z] [... the same error pair repeats from 2024-10-12 22:29:59.090768 through 22:30:00.063672 ...] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:41.669 [2024-10-12 22:30:00.076435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.669 [2024-10-12 22:30:00.076451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.669 18918.67 IOPS, 147.80 MiB/s [2024-10-12T20:30:00.158Z] [2024-10-12 22:30:00.091474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.669 [2024-10-12 22:30:00.091490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.669 [2024-10-12 22:30:00.104300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.669 [2024-10-12 22:30:00.104315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.669 [2024-10-12 22:30:00.118910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.669 [2024-10-12 22:30:00.118925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.669 [2024-10-12 22:30:00.131828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.669 [2024-10-12 22:30:00.131844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.669 [2024-10-12 22:30:00.144735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.669 [2024-10-12 22:30:00.144750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.159686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.159702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.172469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.172484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:41.931 [2024-10-12 22:30:00.187532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.187548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.199904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.199919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.211912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.211928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.224610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.224625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.239171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.239186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.251696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.251711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.264269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.264284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.279281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.279297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.292153] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.292169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.303741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.303757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.316264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.316279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.330923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.330938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.343905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.343921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.355555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.355570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.368231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.368246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.383218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.383234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.395897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.395914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:41.931 [2024-10-12 22:30:00.407911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:41.931 [2024-10-12 22:30:00.407927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.420090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.420110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.435441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.435456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.448047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.448063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.460342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.460357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.475113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.475129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.487841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.487856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.499936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 
[2024-10-12 22:30:00.499952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.512782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.512796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.527698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.527714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.540557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.540572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.555057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.555072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.567969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.567985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.580699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.580714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.595039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.595055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.607865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.607881] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.620261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.620276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.635298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.635313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.648145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.648160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.663011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.663027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.192 [2024-10-12 22:30:00.675543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.192 [2024-10-12 22:30:00.675558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.688156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.688171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.703168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.703183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.715992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.716008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:42.454 [2024-10-12 22:30:00.728297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.728312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.742971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.742987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.756457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.756473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.771290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.771305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.784252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.784267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.798902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.798917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.811588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.811603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.824089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.824107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.839672] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.839687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.852544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.852558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.867458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.867473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.880167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.880182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.895382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.895398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.908063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.908078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.920969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.920985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.454 [2024-10-12 22:30:00.935251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.454 [2024-10-12 22:30:00.935266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:00.948593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:00.948609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:00.963437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:00.963452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:00.976206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:00.976221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:00.991278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:00.991293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.004349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.004364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.019534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.019550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.032175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.032190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.044510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.044525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.059299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 
[2024-10-12 22:30:01.059315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.072270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.072285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.087000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.087015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 18917.00 IOPS, 147.79 MiB/s [2024-10-12T20:30:01.205Z] [2024-10-12 22:30:01.099734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.099749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.112908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.112923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.126722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.126737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.139675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.139690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.152437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.152452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.167535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 
[2024-10-12 22:30:01.167550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.180278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.180294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.716 [2024-10-12 22:30:01.195164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.716 [2024-10-12 22:30:01.195180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.208207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.208223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.223215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.223230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.236090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.236109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.248028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.248043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.260863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.260878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.274866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.274882] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.288163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.288179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.299348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.299368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.312114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.312129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.326758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.326773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.339704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.339719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.352096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.352114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.367247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.367263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.380140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.380155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:42.978 [2024-10-12 22:30:01.392780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.392796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.407709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.407724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.420548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.420563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.435444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.435459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.448037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.448052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:42.978 [2024-10-12 22:30:01.460584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:42.978 [2024-10-12 22:30:01.460598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.475184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.475200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.488174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.488189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.503304] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.503321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.516252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.516266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.531120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.531136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.543973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.543989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.555633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.555654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.568684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.568699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.583486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.583501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.595886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.595903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.607975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.607991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.620922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.620937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.636123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.636139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.648999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.649014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.662996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.663011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.676074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.676089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.690551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.690567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.703988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 [2024-10-12 22:30:01.704003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.239 [2024-10-12 22:30:01.716426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.239 
[2024-10-12 22:30:01.716442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.731054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.731070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.743781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.743796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.756930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.756945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.770899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.770915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.783884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.783901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.795805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.795821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.808449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.808470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.823455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.823471] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.836303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.836318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.851190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.851206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.864394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.864408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.500 [2024-10-12 22:30:01.879036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.500 [2024-10-12 22:30:01.879051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.501 [2024-10-12 22:30:01.892144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.501 [2024-10-12 22:30:01.892160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.501 [2024-10-12 22:30:01.903982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.501 [2024-10-12 22:30:01.903997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.501 [2024-10-12 22:30:01.916813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.501 [2024-10-12 22:30:01.916828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.501 [2024-10-12 22:30:01.931799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.501 [2024-10-12 22:30:01.931815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:41:43.501 [2024-10-12 22:30:01.944678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.501 [2024-10-12 22:30:01.944693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.501 [2024-10-12 22:30:01.959019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.501 [2024-10-12 22:30:01.959035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.501 [2024-10-12 22:30:01.971846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.501 [2024-10-12 22:30:01.971861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.501 [2024-10-12 22:30:01.984129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.501 [2024-10-12 22:30:01.984145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.762 [2024-10-12 22:30:01.998836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.762 [2024-10-12 22:30:01.998852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.762 [2024-10-12 22:30:02.012082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.762 [2024-10-12 22:30:02.012098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.762 [2024-10-12 22:30:02.024591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.762 [2024-10-12 22:30:02.024606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.762 [2024-10-12 22:30:02.039039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.762 [2024-10-12 22:30:02.039055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.762 [2024-10-12 22:30:02.052008] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:43.762 [2024-10-12 22:30:02.052024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:43.762 [2024-10-12 22:30:02.063839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:43.762 [2024-10-12 22:30:02.063854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:43.762 [2024-10-12 22:30:02.076483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:43.762 [2024-10-12 22:30:02.076498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:43.762 [2024-10-12 22:30:02.091083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:43.762 [2024-10-12 22:30:02.091100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:43.762 18921.40 IOPS, 147.82 MiB/s
00:41:43.762 Latency(us)
00:41:43.762 [2024-10-12T20:30:02.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:43.763 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:43.763 Nvme1n1 : 5.01 18921.43 147.82 0.00 0.00 6759.05 2798.93 11250.35
00:41:43.763 [2024-10-12T20:30:02.252Z] ===================================================================================================================
00:41:43.763 [2024-10-12T20:30:02.252Z] Total : 18921.43 147.82 0.00 0.00 6759.05 2798.93 11250.35
00:41:43.763 [2024-10-12 22:30:02.099960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:43.763 [2024-10-12 22:30:02.099975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:43.763 [2024-10-12 22:30:02.111956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:43.763 [2024-10-12 22:30:02.111970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused:
*ERROR*: Unable to add namespace 00:41:43.763 [2024-10-12 22:30:02.123962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.763 [2024-10-12 22:30:02.123974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.763 [2024-10-12 22:30:02.135958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.763 [2024-10-12 22:30:02.135971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.763 [2024-10-12 22:30:02.147955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.763 [2024-10-12 22:30:02.147966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.763 [2024-10-12 22:30:02.159955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.763 [2024-10-12 22:30:02.159965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.763 [2024-10-12 22:30:02.171954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.763 [2024-10-12 22:30:02.171964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.763 [2024-10-12 22:30:02.183956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.763 [2024-10-12 22:30:02.183967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.763 [2024-10-12 22:30:02.195955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.763 [2024-10-12 22:30:02.195966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.763 [2024-10-12 22:30:02.207952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:43.763 [2024-10-12 22:30:02.207961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:43.763 
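The repeated `Requested NSID 1 already in use` / `Unable to add namespace` pairs above come from the test deliberately re-adding a namespace at an NSID that is still attached. As a rough illustration of that bookkeeping (a hypothetical sketch, not SPDK's actual `spdk_nvmf_subsystem_add_ns_ext` implementation; the class and names here are invented):

```python
class Subsystem:
    """Hypothetical model of per-subsystem namespace bookkeeping."""

    def __init__(self):
        self.namespaces = {}  # NSID -> attached bdev name

    def add_ns(self, bdev, nsid=None):
        """Attach a bdev as a namespace; an explicitly requested NSID must be free."""
        if nsid is None:
            # auto-allocate the lowest free NSID (valid NSIDs start at 1)
            nsid = 1
            while nsid in self.namespaces:
                nsid += 1
        elif nsid in self.namespaces:
            raise ValueError(f"Requested NSID {nsid} already in use")
        self.namespaces[nsid] = bdev
        return nsid

subsys = Subsystem()
subsys.add_ns("malloc0", nsid=1)
try:
    subsys.add_ns("delay0", nsid=1)  # second attach at the same NSID fails
except ValueError as err:
    print(err)
```

Removing the namespace first (as the script's later `nvmf_subsystem_remove_ns ... 1` does) frees the NSID, after which the `delay0` add at NSID 1 succeeds.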
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3847003) - No such process 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3847003 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:43.763 delay0 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:43.763 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:44.025 22:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:44.025 [2024-10-12 22:30:02.359532] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:50.616 [2024-10-12 22:30:08.622912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e8a00 is same with the state(6) to be set 00:41:50.616 Initializing NVMe Controllers 00:41:50.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:50.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:50.616 Initialization complete. Launching workers. 00:41:50.616 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2246 00:41:50.616 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2525, failed to submit 41 00:41:50.616 success 2337, unsuccessful 188, failed 0 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:50.616 rmmod nvme_tcp 00:41:50.616 rmmod nvme_fabrics 00:41:50.616 rmmod nvme_keyring 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 3844818 ']' 00:41:50.616 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 3844818 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3844818 ']' 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3844818 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3844818 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3844818' 00:41:50.617 killing process with pid 3844818 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3844818 00:41:50.617 
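The `killprocess` helper traced above probes the target with `kill -0 PID` before sending a real signal: signal 0 delivers nothing, but the kernel still performs the existence/permission check (hence the earlier `kill: (3847003) - No such process` once the app had exited). A minimal Python equivalent of that probe, shown only as an illustration and not part of the SPDK scripts:

```python
import os

def process_alive(pid: int) -> bool:
    """Equivalent of the shell's `kill -0 PID`: send no signal,
    just ask the kernel whether the process exists."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False   # no such process
    except PermissionError:
        return True    # exists, but owned by another user
    return True

print(process_alive(os.getpid()))  # True: the current process exists
```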
22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3844818 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:50.617 22:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:52.532 22:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:52.532 00:41:52.532 real 0m33.614s 00:41:52.532 user 0m42.551s 00:41:52.532 sys 0m12.308s 00:41:52.532 22:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:52.532 22:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:52.532 ************************************ 00:41:52.532 END TEST nvmf_zcopy 00:41:52.532 ************************************ 00:41:52.532 22:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:52.532 22:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:52.532 22:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:52.532 22:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:52.794 ************************************ 00:41:52.794 START TEST nvmf_nmic 00:41:52.794 ************************************ 00:41:52.794 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:41:52.794 * Looking for test storage... 
00:41:52.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:52.794 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:52.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:52.795 --rc genhtml_branch_coverage=1 00:41:52.795 --rc genhtml_function_coverage=1 00:41:52.795 --rc genhtml_legend=1 00:41:52.795 --rc geninfo_all_blocks=1 00:41:52.795 --rc geninfo_unexecuted_blocks=1 00:41:52.795 00:41:52.795 ' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:52.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:52.795 --rc genhtml_branch_coverage=1 00:41:52.795 --rc genhtml_function_coverage=1 00:41:52.795 --rc genhtml_legend=1 00:41:52.795 --rc geninfo_all_blocks=1 00:41:52.795 --rc geninfo_unexecuted_blocks=1 00:41:52.795 00:41:52.795 ' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:52.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:52.795 --rc genhtml_branch_coverage=1 00:41:52.795 --rc genhtml_function_coverage=1 00:41:52.795 --rc genhtml_legend=1 00:41:52.795 --rc geninfo_all_blocks=1 00:41:52.795 --rc geninfo_unexecuted_blocks=1 00:41:52.795 00:41:52.795 ' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:52.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:52.795 --rc genhtml_branch_coverage=1 00:41:52.795 --rc genhtml_function_coverage=1 00:41:52.795 --rc genhtml_legend=1 00:41:52.795 --rc geninfo_all_blocks=1 00:41:52.795 --rc geninfo_unexecuted_blocks=1 00:41:52.795 00:41:52.795 ' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:41:52.795 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:52.796 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:41:52.796 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:41:52.796 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:41:52.796 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:52.796 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:52.796 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:52.796 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:41:52.796 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:41:52.796 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:41:52.796 22:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:00.932 22:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:00.932 22:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:00.932 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:00.932 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:00.932 22:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:00.932 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:00.932 
22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:00.932 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 
4420 -j ACCEPT 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:00.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:00.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:42:00.932 00:42:00.932 --- 10.0.0.2 ping statistics --- 00:42:00.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.932 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:00.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:00.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:42:00.932 00:42:00.932 --- 10.0.0.1 ping statistics --- 00:42:00.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.932 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:00.932 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:00.933 22:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=3853910 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 3853910 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3853910 ']' 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:00.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 [2024-10-12 22:30:18.542475] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:00.933 [2024-10-12 22:30:18.543591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:42:00.933 [2024-10-12 22:30:18.543642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:00.933 [2024-10-12 22:30:18.618631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:00.933 [2024-10-12 22:30:18.667054] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:00.933 [2024-10-12 22:30:18.667098] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:00.933 [2024-10-12 22:30:18.667112] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:00.933 [2024-10-12 22:30:18.667122] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:00.933 [2024-10-12 22:30:18.667128] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:00.933 [2024-10-12 22:30:18.667231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:00.933 [2024-10-12 22:30:18.667406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:00.933 [2024-10-12 22:30:18.667568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:00.933 [2024-10-12 22:30:18.667570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:00.933 [2024-10-12 22:30:18.735362] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:00.933 [2024-10-12 22:30:18.736949] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:00.933 [2024-10-12 22:30:18.737057] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:00.933 [2024-10-12 22:30:18.737665] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:00.933 [2024-10-12 22:30:18.737729] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 [2024-10-12 22:30:18.824462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 Malloc0 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:00.933 22:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 [2024-10-12 22:30:18.908605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:00.933 test case1: single bdev can't be used in multiple subsystems 
00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 [2024-10-12 22:30:18.944046] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:00.933 [2024-10-12 22:30:18.944076] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:00.933 [2024-10-12 22:30:18.944085] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:00.933 request: 00:42:00.933 { 00:42:00.933 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:00.933 "namespace": { 00:42:00.933 "bdev_name": "Malloc0", 00:42:00.933 "no_auto_visible": false 00:42:00.933 }, 00:42:00.933 "method": "nvmf_subsystem_add_ns", 00:42:00.933 "req_id": 1 00:42:00.933 } 00:42:00.933 Got JSON-RPC error response 00:42:00.933 response: 00:42:00.933 { 00:42:00.933 "code": -32602, 00:42:00.933 "message": "Invalid parameters" 00:42:00.933 } 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:42:00.933 Adding namespace failed - expected result. 
00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:00.933 test case2: host connect to nvmf target in multiple paths 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.933 [2024-10-12 22:30:18.956221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:00.933 22:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:00.934 22:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:01.504 22:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:01.504 22:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:42:01.504 22:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:42:01.505 22:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:42:01.505 22:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:42:03.418 22:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:42:03.418 22:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:42:03.418 22:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:42:03.418 22:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:42:03.418 22:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:42:03.418 22:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:42:03.418 22:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:03.418 [global] 00:42:03.418 thread=1 00:42:03.418 invalidate=1 00:42:03.418 rw=write 00:42:03.418 time_based=1 00:42:03.418 runtime=1 00:42:03.418 ioengine=libaio 00:42:03.418 direct=1 00:42:03.418 bs=4096 00:42:03.418 iodepth=1 00:42:03.418 norandommap=0 00:42:03.418 numjobs=1 00:42:03.418 00:42:03.418 verify_dump=1 00:42:03.418 verify_backlog=512 00:42:03.418 verify_state_save=0 00:42:03.418 do_verify=1 00:42:03.418 verify=crc32c-intel 00:42:03.418 [job0] 00:42:03.418 filename=/dev/nvme0n1 00:42:03.699 Could not set queue depth (nvme0n1) 00:42:03.960 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:03.960 fio-3.35 00:42:03.960 Starting 1 thread 00:42:04.901 00:42:04.901 job0: (groupid=0, jobs=1): err= 0: pid=3854767: Sat Oct 12 
22:30:23 2024 00:42:04.901 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:42:04.901 slat (nsec): min=8240, max=45204, avg=24455.38, stdev=3935.44 00:42:04.901 clat (usec): min=759, max=1201, avg=1009.55, stdev=75.51 00:42:04.901 lat (usec): min=768, max=1239, avg=1034.00, stdev=76.58 00:42:04.901 clat percentiles (usec): 00:42:04.901 | 1.00th=[ 807], 5.00th=[ 865], 10.00th=[ 906], 20.00th=[ 955], 00:42:04.901 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1037], 00:42:04.901 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1090], 95.00th=[ 1123], 00:42:04.901 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1205], 99.95th=[ 1205], 00:42:04.901 | 99.99th=[ 1205] 00:42:04.901 write: IOPS=756, BW=3025KiB/s (3098kB/s)(3028KiB/1001msec); 0 zone resets 00:42:04.901 slat (nsec): min=9443, max=67367, avg=27094.18, stdev=9908.36 00:42:04.901 clat (usec): min=337, max=832, avg=582.62, stdev=91.40 00:42:04.901 lat (usec): min=349, max=864, avg=609.71, stdev=96.52 00:42:04.901 clat percentiles (usec): 00:42:04.901 | 1.00th=[ 363], 5.00th=[ 412], 10.00th=[ 461], 20.00th=[ 502], 00:42:04.901 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 603], 00:42:04.901 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 693], 95.00th=[ 717], 00:42:04.901 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 832], 99.95th=[ 832], 00:42:04.901 | 99.99th=[ 832] 00:42:04.901 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:04.901 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:04.901 lat (usec) : 500=11.51%, 750=46.81%, 1000=17.89% 00:42:04.901 lat (msec) : 2=23.80% 00:42:04.901 cpu : usr=1.80%, sys=3.40%, ctx=1269, majf=0, minf=1 00:42:04.901 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:04.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:04.901 issued rwts: 
total=512,757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:04.901 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:04.901 00:42:04.901 Run status group 0 (all jobs): 00:42:04.901 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:42:04.901 WRITE: bw=3025KiB/s (3098kB/s), 3025KiB/s-3025KiB/s (3098kB/s-3098kB/s), io=3028KiB (3101kB), run=1001-1001msec 00:42:04.901 00:42:04.901 Disk stats (read/write): 00:42:04.901 nvme0n1: ios=562/593, merge=0/0, ticks=563/328, in_queue=891, util=93.79% 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:05.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:42:05.162 22:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:05.162 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:05.162 rmmod nvme_tcp 00:42:05.162 rmmod nvme_fabrics 00:42:05.162 rmmod nvme_keyring 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 3853910 ']' 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 3853910 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3853910 ']' 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3853910 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3853910 
00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:05.422 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3853910' 00:42:05.423 killing process with pid 3853910 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3853910 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3853910 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:05.423 22:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:05.423 22:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:07.966 22:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:07.966 00:42:07.966 real 0m14.919s 00:42:07.966 user 0m36.187s 00:42:07.966 sys 0m6.989s 00:42:07.966 22:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:07.966 22:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:07.966 ************************************ 00:42:07.966 END TEST nvmf_nmic 00:42:07.966 ************************************ 00:42:07.966 22:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:07.966 22:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:07.966 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:07.966 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:07.966 ************************************ 00:42:07.966 START TEST nvmf_fio_target 00:42:07.966 ************************************ 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:07.967 * Looking for test storage... 
00:42:07.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:07.967 
22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:07.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:07.967 --rc genhtml_branch_coverage=1 00:42:07.967 --rc genhtml_function_coverage=1 00:42:07.967 --rc genhtml_legend=1 00:42:07.967 --rc geninfo_all_blocks=1 00:42:07.967 --rc geninfo_unexecuted_blocks=1 00:42:07.967 00:42:07.967 ' 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:07.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:07.967 --rc genhtml_branch_coverage=1 00:42:07.967 --rc genhtml_function_coverage=1 00:42:07.967 --rc genhtml_legend=1 00:42:07.967 --rc geninfo_all_blocks=1 00:42:07.967 --rc geninfo_unexecuted_blocks=1 00:42:07.967 00:42:07.967 ' 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:07.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:07.967 --rc genhtml_branch_coverage=1 00:42:07.967 --rc genhtml_function_coverage=1 00:42:07.967 --rc genhtml_legend=1 00:42:07.967 --rc geninfo_all_blocks=1 00:42:07.967 --rc geninfo_unexecuted_blocks=1 00:42:07.967 00:42:07.967 ' 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:07.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:07.967 --rc genhtml_branch_coverage=1 00:42:07.967 --rc genhtml_function_coverage=1 00:42:07.967 --rc genhtml_legend=1 00:42:07.967 --rc geninfo_all_blocks=1 
00:42:07.967 --rc geninfo_unexecuted_blocks=1 00:42:07.967 00:42:07.967 ' 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:07.967 
22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.967 22:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:42:07.967 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:07.968 
22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:07.968 22:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:42:07.968 22:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:42:16.109 22:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 
]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:16.109 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:16.109 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:16.109 22:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:16.109 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 
-- # net_devs+=("${pci_net_devs[@]}") 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:16.109 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:16.109 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:16.110 22:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip 
link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:16.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:16.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:42:16.110 00:42:16.110 --- 10.0.0.2 ping statistics --- 00:42:16.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:16.110 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:16.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:16.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:42:16.110 00:42:16.110 --- 10.0.0.1 ping statistics --- 00:42:16.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:16.110 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:16.110 22:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=3859256 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 3859256 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3859256 ']' 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:16.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:16.110 22:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:16.110 [2024-10-12 22:30:33.770038] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:16.110 [2024-10-12 22:30:33.771193] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:42:16.110 [2024-10-12 22:30:33.771243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:16.110 [2024-10-12 22:30:33.860428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:16.110 [2024-10-12 22:30:33.907282] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:16.110 [2024-10-12 22:30:33.907335] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:16.110 [2024-10-12 22:30:33.907343] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:16.110 [2024-10-12 22:30:33.907350] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:16.110 [2024-10-12 22:30:33.907357] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:16.110 [2024-10-12 22:30:33.907511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:16.110 [2024-10-12 22:30:33.907656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:16.110 [2024-10-12 22:30:33.907815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:16.110 [2024-10-12 22:30:33.907817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:16.110 [2024-10-12 22:30:33.976565] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:16.110 [2024-10-12 22:30:33.978133] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:16.110 [2024-10-12 22:30:33.978165] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:42:16.110 [2024-10-12 22:30:33.978963] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:16.110 [2024-10-12 22:30:33.978987] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:16.110 22:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:16.110 22:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:42:16.110 22:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:16.110 22:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:16.110 22:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:16.371 22:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:16.371 22:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:16.371 [2024-10-12 22:30:34.772689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:16.371 22:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:16.632 22:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:42:16.632 22:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:42:16.892 22:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:42:16.892 22:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:16.892 22:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:42:16.892 22:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:17.154 22:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:42:17.154 22:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:42:17.414 22:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:17.414 22:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:42:17.414 22:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:17.675 22:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:42:17.675 22:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:17.936 22:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:42:17.936 22:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:42:18.197 22:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:18.197 22:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:18.197 22:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:18.459 22:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:18.459 22:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:42:18.720 22:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:18.720 [2024-10-12 22:30:37.192689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:18.981 22:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:42:18.981 22:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:42:19.241 22:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:19.810 22:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:42:19.810 22:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:42:19.810 22:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:42:19.810 22:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:42:19.810 22:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:42:19.810 22:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:42:21.721 22:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:42:21.721 22:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:42:21.721 22:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:42:21.721 22:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:42:21.721 22:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:42:21.721 22:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:42:21.721 22:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:21.721 [global] 00:42:21.721 thread=1 00:42:21.721 invalidate=1 00:42:21.721 rw=write 00:42:21.721 time_based=1 00:42:21.721 runtime=1 00:42:21.721 ioengine=libaio 00:42:21.721 direct=1 00:42:21.721 bs=4096 00:42:21.721 iodepth=1 00:42:21.721 norandommap=0 00:42:21.721 numjobs=1 00:42:21.721 00:42:21.721 verify_dump=1 00:42:21.721 verify_backlog=512 00:42:21.721 verify_state_save=0 00:42:21.721 do_verify=1 00:42:21.721 verify=crc32c-intel 00:42:21.721 [job0] 00:42:21.721 filename=/dev/nvme0n1 00:42:21.721 [job1] 00:42:21.721 filename=/dev/nvme0n2 00:42:21.721 [job2] 00:42:21.721 filename=/dev/nvme0n3 00:42:21.721 [job3] 00:42:21.721 filename=/dev/nvme0n4 00:42:21.721 Could not set queue depth (nvme0n1) 00:42:21.721 Could not set queue depth (nvme0n2) 00:42:21.721 Could not set queue depth (nvme0n3) 00:42:21.721 Could not set queue depth (nvme0n4) 00:42:21.981 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:21.981 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:21.981 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:21.981 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:21.981 fio-3.35 00:42:21.981 Starting 4 threads 00:42:23.361 00:42:23.361 job0: (groupid=0, jobs=1): err= 0: pid=3860577: Sat Oct 12 22:30:41 2024 00:42:23.361 read: IOPS=16, BW=67.7KiB/s (69.3kB/s)(68.0KiB/1005msec) 00:42:23.361 slat (nsec): min=25869, max=26700, avg=26167.82, stdev=205.37 00:42:23.361 clat (usec): min=40827, max=41849, avg=41155.93, stdev=342.38 00:42:23.361 lat (usec): min=40853, 
max=41875, avg=41182.10, stdev=342.30 00:42:23.361 clat percentiles (usec): 00:42:23.361 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:42:23.361 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:23.361 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:42:23.361 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:42:23.361 | 99.99th=[41681] 00:42:23.361 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:42:23.361 slat (nsec): min=9534, max=54628, avg=33223.10, stdev=5576.63 00:42:23.361 clat (usec): min=105, max=873, avg=553.73, stdev=140.67 00:42:23.361 lat (usec): min=115, max=907, avg=586.95, stdev=141.31 00:42:23.361 clat percentiles (usec): 00:42:23.361 | 1.00th=[ 253], 5.00th=[ 326], 10.00th=[ 363], 20.00th=[ 429], 00:42:23.361 | 30.00th=[ 474], 40.00th=[ 498], 50.00th=[ 553], 60.00th=[ 627], 00:42:23.361 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 750], 00:42:23.361 | 99.00th=[ 816], 99.50th=[ 848], 99.90th=[ 873], 99.95th=[ 873], 00:42:23.361 | 99.99th=[ 873] 00:42:23.361 bw ( KiB/s): min= 4096, max= 4096, per=42.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:23.361 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:23.361 lat (usec) : 250=0.95%, 500=38.56%, 750=52.36%, 1000=4.91% 00:42:23.361 lat (msec) : 50=3.21% 00:42:23.361 cpu : usr=1.00%, sys=2.29%, ctx=529, majf=0, minf=1 00:42:23.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:23.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.361 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:23.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:23.361 job1: (groupid=0, jobs=1): err= 0: pid=3860586: Sat Oct 12 22:30:41 2024 00:42:23.361 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:42:23.361 slat (nsec): min=7069, max=62391, avg=27149.90, stdev=3375.24 00:42:23.361 clat (usec): min=512, max=1301, avg=1018.55, stdev=142.75 00:42:23.361 lat (usec): min=539, max=1328, avg=1045.70, stdev=142.88 00:42:23.361 clat percentiles (usec): 00:42:23.361 | 1.00th=[ 635], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 889], 00:42:23.361 | 30.00th=[ 963], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1074], 00:42:23.361 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:42:23.361 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1303], 00:42:23.361 | 99.99th=[ 1303] 00:42:23.361 write: IOPS=707, BW=2829KiB/s (2897kB/s)(2832KiB/1001msec); 0 zone resets 00:42:23.361 slat (nsec): min=9460, max=72624, avg=32585.09, stdev=8999.33 00:42:23.361 clat (usec): min=221, max=1047, avg=608.62, stdev=143.09 00:42:23.361 lat (usec): min=255, max=1082, avg=641.20, stdev=145.71 00:42:23.361 clat percentiles (usec): 00:42:23.361 | 1.00th=[ 306], 5.00th=[ 379], 10.00th=[ 429], 20.00th=[ 486], 00:42:23.361 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 644], 00:42:23.361 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 848], 00:42:23.361 | 99.00th=[ 979], 99.50th=[ 1029], 99.90th=[ 1045], 99.95th=[ 1045], 00:42:23.361 | 99.99th=[ 1045] 00:42:23.361 bw ( KiB/s): min= 4096, max= 4096, per=42.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:23.361 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:23.361 lat (usec) : 250=0.16%, 500=13.03%, 750=38.69%, 1000=21.07% 00:42:23.361 lat (msec) : 2=27.05% 00:42:23.361 cpu : usr=2.60%, sys=4.90%, ctx=1223, majf=0, minf=1 00:42:23.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:23.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.361 issued rwts: total=512,708,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:42:23.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:23.361 job2: (groupid=0, jobs=1): err= 0: pid=3860599: Sat Oct 12 22:30:41 2024 00:42:23.361 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:42:23.361 slat (nsec): min=8702, max=56041, avg=27005.30, stdev=2710.85 00:42:23.361 clat (usec): min=466, max=1454, avg=1011.54, stdev=113.39 00:42:23.361 lat (usec): min=492, max=1481, avg=1038.54, stdev=113.55 00:42:23.361 clat percentiles (usec): 00:42:23.361 | 1.00th=[ 725], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 930], 00:42:23.361 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1029], 00:42:23.361 | 70.00th=[ 1057], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1205], 00:42:23.361 | 99.00th=[ 1270], 99.50th=[ 1270], 99.90th=[ 1450], 99.95th=[ 1450], 00:42:23.361 | 99.99th=[ 1450] 00:42:23.361 write: IOPS=717, BW=2869KiB/s (2938kB/s)(2872KiB/1001msec); 0 zone resets 00:42:23.361 slat (nsec): min=9025, max=70573, avg=30995.52, stdev=9182.69 00:42:23.361 clat (usec): min=244, max=965, avg=606.89, stdev=115.55 00:42:23.361 lat (usec): min=254, max=999, avg=637.88, stdev=119.09 00:42:23.361 clat percentiles (usec): 00:42:23.361 | 1.00th=[ 326], 5.00th=[ 396], 10.00th=[ 449], 20.00th=[ 519], 00:42:23.361 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 635], 00:42:23.361 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 783], 00:42:23.361 | 99.00th=[ 857], 99.50th=[ 898], 99.90th=[ 963], 99.95th=[ 963], 00:42:23.361 | 99.99th=[ 963] 00:42:23.361 bw ( KiB/s): min= 4096, max= 4096, per=42.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:23.361 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:23.361 lat (usec) : 250=0.08%, 500=10.08%, 750=42.93%, 1000=26.91% 00:42:23.361 lat (msec) : 2=20.00% 00:42:23.361 cpu : usr=3.30%, sys=4.10%, ctx=1231, majf=0, minf=1 00:42:23.361 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:23.361 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.361 issued rwts: total=512,718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:23.361 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:23.361 job3: (groupid=0, jobs=1): err= 0: pid=3860605: Sat Oct 12 22:30:41 2024 00:42:23.361 read: IOPS=16, BW=67.7KiB/s (69.3kB/s)(68.0KiB/1005msec) 00:42:23.361 slat (nsec): min=25918, max=27040, avg=26380.71, stdev=307.97 00:42:23.362 clat (usec): min=41096, max=42118, avg=41860.92, stdev=264.71 00:42:23.362 lat (usec): min=41123, max=42145, avg=41887.30, stdev=264.61 00:42:23.362 clat percentiles (usec): 00:42:23.362 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:42:23.362 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:42:23.362 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:23.362 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:23.362 | 99.99th=[42206] 00:42:23.362 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:42:23.362 slat (nsec): min=9004, max=80854, avg=29871.03, stdev=9661.21 00:42:23.362 clat (usec): min=123, max=1002, avg=534.92, stdev=155.70 00:42:23.362 lat (usec): min=132, max=1036, avg=564.79, stdev=159.81 00:42:23.362 clat percentiles (usec): 00:42:23.362 | 1.00th=[ 233], 5.00th=[ 285], 10.00th=[ 326], 20.00th=[ 388], 00:42:23.362 | 30.00th=[ 449], 40.00th=[ 494], 50.00th=[ 537], 60.00th=[ 578], 00:42:23.362 | 70.00th=[ 619], 80.00th=[ 668], 90.00th=[ 725], 95.00th=[ 799], 00:42:23.362 | 99.00th=[ 898], 99.50th=[ 922], 99.90th=[ 1004], 99.95th=[ 1004], 00:42:23.362 | 99.99th=[ 1004] 00:42:23.362 bw ( KiB/s): min= 4096, max= 4096, per=42.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:23.362 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:23.362 lat (usec) : 250=1.89%, 500=38.37%, 750=49.15%, 1000=7.18% 
00:42:23.362 lat (msec) : 2=0.19%, 50=3.21% 00:42:23.362 cpu : usr=1.00%, sys=1.99%, ctx=530, majf=0, minf=1 00:42:23.362 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:23.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.362 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:23.362 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:23.362 00:42:23.362 Run status group 0 (all jobs): 00:42:23.362 READ: bw=4211KiB/s (4312kB/s), 67.7KiB/s-2046KiB/s (69.3kB/s-2095kB/s), io=4232KiB (4334kB), run=1001-1005msec 00:42:23.362 WRITE: bw=9751KiB/s (9985kB/s), 2038KiB/s-2869KiB/s (2087kB/s-2938kB/s), io=9800KiB (10.0MB), run=1001-1005msec 00:42:23.362 00:42:23.362 Disk stats (read/write): 00:42:23.362 nvme0n1: ios=62/512, merge=0/0, ticks=528/215, in_queue=743, util=86.07% 00:42:23.362 nvme0n2: ios=499/512, merge=0/0, ticks=1300/247, in_queue=1547, util=88.46% 00:42:23.362 nvme0n3: ios=527/512, merge=0/0, ticks=554/254, in_queue=808, util=95.45% 00:42:23.362 nvme0n4: ios=69/512, merge=0/0, ticks=587/205, in_queue=792, util=96.68% 00:42:23.362 22:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:42:23.362 [global] 00:42:23.362 thread=1 00:42:23.362 invalidate=1 00:42:23.362 rw=randwrite 00:42:23.362 time_based=1 00:42:23.362 runtime=1 00:42:23.362 ioengine=libaio 00:42:23.362 direct=1 00:42:23.362 bs=4096 00:42:23.362 iodepth=1 00:42:23.362 norandommap=0 00:42:23.362 numjobs=1 00:42:23.362 00:42:23.362 verify_dump=1 00:42:23.362 verify_backlog=512 00:42:23.362 verify_state_save=0 00:42:23.362 do_verify=1 00:42:23.362 verify=crc32c-intel 00:42:23.362 [job0] 00:42:23.362 filename=/dev/nvme0n1 00:42:23.362 [job1] 00:42:23.362 filename=/dev/nvme0n2 
00:42:23.362 [job2] 00:42:23.362 filename=/dev/nvme0n3 00:42:23.362 [job3] 00:42:23.362 filename=/dev/nvme0n4 00:42:23.362 Could not set queue depth (nvme0n1) 00:42:23.362 Could not set queue depth (nvme0n2) 00:42:23.362 Could not set queue depth (nvme0n3) 00:42:23.362 Could not set queue depth (nvme0n4) 00:42:23.930 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:23.930 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:23.930 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:23.930 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:23.930 fio-3.35 00:42:23.930 Starting 4 threads 00:42:25.311 00:42:25.311 job0: (groupid=0, jobs=1): err= 0: pid=3861083: Sat Oct 12 22:30:43 2024 00:42:25.311 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:42:25.311 slat (nsec): min=7412, max=46717, avg=27838.52, stdev=4588.96 00:42:25.311 clat (usec): min=769, max=1286, avg=1046.48, stdev=91.39 00:42:25.311 lat (usec): min=798, max=1313, avg=1074.32, stdev=91.77 00:42:25.311 clat percentiles (usec): 00:42:25.311 | 1.00th=[ 832], 5.00th=[ 898], 10.00th=[ 922], 20.00th=[ 971], 00:42:25.311 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1074], 00:42:25.311 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1188], 00:42:25.311 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1287], 99.95th=[ 1287], 00:42:25.311 | 99.99th=[ 1287] 00:42:25.311 write: IOPS=645, BW=2581KiB/s (2643kB/s)(2584KiB/1001msec); 0 zone resets 00:42:25.311 slat (nsec): min=9322, max=56813, avg=31801.00, stdev=9084.56 00:42:25.311 clat (usec): min=252, max=1066, avg=650.59, stdev=139.74 00:42:25.311 lat (usec): min=262, max=1101, avg=682.39, stdev=143.03 00:42:25.311 clat percentiles (usec): 00:42:25.311 | 1.00th=[ 326], 5.00th=[ 
420], 10.00th=[ 465], 20.00th=[ 529], 00:42:25.311 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 660], 60.00th=[ 693], 00:42:25.311 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 824], 95.00th=[ 873], 00:42:25.311 | 99.00th=[ 1004], 99.50th=[ 1020], 99.90th=[ 1074], 99.95th=[ 1074], 00:42:25.311 | 99.99th=[ 1074] 00:42:25.311 bw ( KiB/s): min= 4096, max= 4096, per=44.74%, avg=4096.00, stdev= 0.00, samples=1 00:42:25.311 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:25.311 lat (usec) : 500=8.20%, 750=34.80%, 1000=25.65% 00:42:25.311 lat (msec) : 2=31.35% 00:42:25.311 cpu : usr=2.20%, sys=4.80%, ctx=1161, majf=0, minf=1 00:42:25.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:25.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.311 issued rwts: total=512,646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:25.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:25.311 job1: (groupid=0, jobs=1): err= 0: pid=3861086: Sat Oct 12 22:30:43 2024 00:42:25.311 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:42:25.311 slat (nsec): min=24392, max=44186, avg=25481.97, stdev=2620.82 00:42:25.311 clat (usec): min=773, max=1275, avg=1066.77, stdev=87.56 00:42:25.311 lat (usec): min=798, max=1300, avg=1092.26, stdev=87.67 00:42:25.311 clat percentiles (usec): 00:42:25.311 | 1.00th=[ 816], 5.00th=[ 889], 10.00th=[ 947], 20.00th=[ 1004], 00:42:25.311 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:42:25.311 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188], 00:42:25.311 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:42:25.311 | 99.99th=[ 1270] 00:42:25.311 write: IOPS=633, BW=2533KiB/s (2594kB/s)(2536KiB/1001msec); 0 zone resets 00:42:25.311 slat (nsec): min=9266, max=56628, avg=29238.65, stdev=8327.70 00:42:25.311 clat 
(usec): min=152, max=960, avg=651.89, stdev=140.76 00:42:25.311 lat (usec): min=163, max=1006, avg=681.12, stdev=143.49 00:42:25.311 clat percentiles (usec): 00:42:25.311 | 1.00th=[ 306], 5.00th=[ 408], 10.00th=[ 469], 20.00th=[ 529], 00:42:25.311 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 652], 60.00th=[ 701], 00:42:25.311 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 832], 95.00th=[ 865], 00:42:25.311 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:42:25.311 | 99.99th=[ 963] 00:42:25.311 bw ( KiB/s): min= 4096, max= 4096, per=44.74%, avg=4096.00, stdev= 0.00, samples=1 00:42:25.311 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:25.311 lat (usec) : 250=0.17%, 500=7.68%, 750=32.11%, 1000=23.12% 00:42:25.311 lat (msec) : 2=36.91% 00:42:25.311 cpu : usr=2.20%, sys=2.80%, ctx=1146, majf=0, minf=2 00:42:25.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:25.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.311 issued rwts: total=512,634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:25.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:25.311 job2: (groupid=0, jobs=1): err= 0: pid=3861092: Sat Oct 12 22:30:43 2024 00:42:25.311 read: IOPS=20, BW=80.9KiB/s (82.9kB/s)(84.0KiB/1038msec) 00:42:25.311 slat (nsec): min=16776, max=29621, avg=26070.05, stdev=2291.09 00:42:25.311 clat (usec): min=837, max=41880, avg=37335.95, stdev=12133.07 00:42:25.311 lat (usec): min=865, max=41907, avg=37362.02, stdev=12132.08 00:42:25.311 clat percentiles (usec): 00:42:25.311 | 1.00th=[ 840], 5.00th=[ 873], 10.00th=[40633], 20.00th=[41157], 00:42:25.311 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:25.311 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:42:25.311 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 
00:42:25.311 | 99.99th=[41681] 00:42:25.311 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:42:25.311 slat (nsec): min=9407, max=65384, avg=30060.44, stdev=8133.92 00:42:25.311 clat (usec): min=130, max=678, avg=455.77, stdev=82.89 00:42:25.311 lat (usec): min=140, max=700, avg=485.83, stdev=84.42 00:42:25.311 clat percentiles (usec): 00:42:25.311 | 1.00th=[ 269], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[ 375], 00:42:25.311 | 30.00th=[ 416], 40.00th=[ 449], 50.00th=[ 465], 60.00th=[ 482], 00:42:25.311 | 70.00th=[ 498], 80.00th=[ 519], 90.00th=[ 553], 95.00th=[ 594], 00:42:25.311 | 99.00th=[ 668], 99.50th=[ 668], 99.90th=[ 676], 99.95th=[ 676], 00:42:25.311 | 99.99th=[ 676] 00:42:25.311 bw ( KiB/s): min= 4096, max= 4096, per=44.74%, avg=4096.00, stdev= 0.00, samples=1 00:42:25.311 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:25.311 lat (usec) : 250=0.75%, 500=68.11%, 750=27.20%, 1000=0.38% 00:42:25.311 lat (msec) : 50=3.56% 00:42:25.311 cpu : usr=0.77%, sys=1.45%, ctx=533, majf=0, minf=1 00:42:25.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:25.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.311 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:25.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:25.311 job3: (groupid=0, jobs=1): err= 0: pid=3861096: Sat Oct 12 22:30:43 2024 00:42:25.311 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:42:25.311 slat (nsec): min=26461, max=61508, avg=27532.95, stdev=3758.64 00:42:25.311 clat (usec): min=602, max=1290, avg=1068.30, stdev=71.50 00:42:25.311 lat (usec): min=629, max=1317, avg=1095.83, stdev=71.58 00:42:25.311 clat percentiles (usec): 00:42:25.311 | 1.00th=[ 848], 5.00th=[ 947], 10.00th=[ 988], 20.00th=[ 1020], 00:42:25.311 | 30.00th=[ 1045], 40.00th=[ 1057], 
50.00th=[ 1074], 60.00th=[ 1090], 00:42:25.311 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1139], 95.00th=[ 1172], 00:42:25.311 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1287], 99.95th=[ 1287], 00:42:25.311 | 99.99th=[ 1287] 00:42:25.311 write: IOPS=583, BW=2334KiB/s (2390kB/s)(2336KiB/1001msec); 0 zone resets 00:42:25.311 slat (nsec): min=9260, max=69707, avg=29675.36, stdev=9968.77 00:42:25.311 clat (usec): min=143, max=963, avg=707.41, stdev=132.00 00:42:25.311 lat (usec): min=177, max=1009, avg=737.08, stdev=136.72 00:42:25.311 clat percentiles (usec): 00:42:25.311 | 1.00th=[ 285], 5.00th=[ 429], 10.00th=[ 553], 20.00th=[ 627], 00:42:25.311 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 717], 60.00th=[ 758], 00:42:25.311 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 857], 95.00th=[ 881], 00:42:25.311 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:42:25.311 | 99.99th=[ 963] 00:42:25.311 bw ( KiB/s): min= 4096, max= 4096, per=44.74%, avg=4096.00, stdev= 0.00, samples=1 00:42:25.311 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:25.311 lat (usec) : 250=0.18%, 500=3.56%, 750=27.83%, 1000=27.10% 00:42:25.311 lat (msec) : 2=41.33% 00:42:25.311 cpu : usr=2.00%, sys=4.50%, ctx=1097, majf=0, minf=2 00:42:25.311 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:25.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:25.311 issued rwts: total=512,584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:25.311 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:25.311 00:42:25.311 Run status group 0 (all jobs): 00:42:25.311 READ: bw=6000KiB/s (6144kB/s), 80.9KiB/s-2046KiB/s (82.9kB/s-2095kB/s), io=6228KiB (6377kB), run=1001-1038msec 00:42:25.311 WRITE: bw=9156KiB/s (9376kB/s), 1973KiB/s-2581KiB/s (2020kB/s-2643kB/s), io=9504KiB (9732kB), run=1001-1038msec 00:42:25.311 00:42:25.311 
Disk stats (read/write): 00:42:25.311 nvme0n1: ios=468/512, merge=0/0, ticks=1372/266, in_queue=1638, util=96.79% 00:42:25.311 nvme0n2: ios=485/512, merge=0/0, ticks=591/310, in_queue=901, util=96.64% 00:42:25.311 nvme0n3: ios=16/512, merge=0/0, ticks=579/227, in_queue=806, util=88.40% 00:42:25.311 nvme0n4: ios=412/512, merge=0/0, ticks=395/287, in_queue=682, util=89.53% 00:42:25.311 22:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:42:25.311 [global] 00:42:25.311 thread=1 00:42:25.311 invalidate=1 00:42:25.311 rw=write 00:42:25.311 time_based=1 00:42:25.311 runtime=1 00:42:25.311 ioengine=libaio 00:42:25.311 direct=1 00:42:25.311 bs=4096 00:42:25.311 iodepth=128 00:42:25.311 norandommap=0 00:42:25.311 numjobs=1 00:42:25.311 00:42:25.311 verify_dump=1 00:42:25.311 verify_backlog=512 00:42:25.311 verify_state_save=0 00:42:25.311 do_verify=1 00:42:25.311 verify=crc32c-intel 00:42:25.311 [job0] 00:42:25.311 filename=/dev/nvme0n1 00:42:25.311 [job1] 00:42:25.311 filename=/dev/nvme0n2 00:42:25.311 [job2] 00:42:25.311 filename=/dev/nvme0n3 00:42:25.311 [job3] 00:42:25.311 filename=/dev/nvme0n4 00:42:25.312 Could not set queue depth (nvme0n1) 00:42:25.312 Could not set queue depth (nvme0n2) 00:42:25.312 Could not set queue depth (nvme0n3) 00:42:25.312 Could not set queue depth (nvme0n4) 00:42:25.571 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:25.571 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:25.571 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:25.571 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:25.571 fio-3.35 00:42:25.571 Starting 4 threads 00:42:27.018 
00:42:27.018 job0: (groupid=0, jobs=1): err= 0: pid=3861606: Sat Oct 12 22:30:45 2024 00:42:27.018 read: IOPS=8677, BW=33.9MiB/s (35.5MB/s)(34.1MiB/1006msec) 00:42:27.018 slat (nsec): min=879, max=7547.2k, avg=54125.60, stdev=364097.13 00:42:27.018 clat (usec): min=1149, max=23193, avg=7347.30, stdev=2488.79 00:42:27.018 lat (usec): min=1157, max=23195, avg=7401.43, stdev=2501.09 00:42:27.018 clat percentiles (usec): 00:42:27.018 | 1.00th=[ 1827], 5.00th=[ 4178], 10.00th=[ 4817], 20.00th=[ 5342], 00:42:27.018 | 30.00th=[ 5932], 40.00th=[ 6521], 50.00th=[ 7111], 60.00th=[ 7898], 00:42:27.018 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[ 9896], 95.00th=[10683], 00:42:27.018 | 99.00th=[17695], 99.50th=[20579], 99.90th=[23200], 99.95th=[23200], 00:42:27.018 | 99.99th=[23200] 00:42:27.018 write: IOPS=9161, BW=35.8MiB/s (37.5MB/s)(36.0MiB/1006msec); 0 zone resets 00:42:27.018 slat (nsec): min=1528, max=9956.1k, avg=49942.21, stdev=292920.01 00:42:27.018 clat (usec): min=658, max=23185, avg=6887.25, stdev=2484.14 00:42:27.018 lat (usec): min=782, max=23199, avg=6937.19, stdev=2495.15 00:42:27.018 clat percentiles (usec): 00:42:27.018 | 1.00th=[ 2278], 5.00th=[ 3687], 10.00th=[ 4359], 20.00th=[ 5276], 00:42:27.018 | 30.00th=[ 5669], 40.00th=[ 5932], 50.00th=[ 6128], 60.00th=[ 6783], 00:42:27.018 | 70.00th=[ 7701], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[11863], 00:42:27.018 | 99.00th=[15401], 99.50th=[15664], 99.90th=[19530], 99.95th=[22414], 00:42:27.018 | 99.99th=[23200] 00:42:27.018 bw ( KiB/s): min=30760, max=42160, per=40.91%, avg=36460.00, stdev=8061.02, samples=2 00:42:27.018 iops : min= 7690, max=10540, avg=9115.00, stdev=2015.25, samples=2 00:42:27.018 lat (usec) : 750=0.01%, 1000=0.09% 00:42:27.018 lat (msec) : 2=0.83%, 4=4.60%, 10=86.26%, 20=7.82%, 50=0.38% 00:42:27.018 cpu : usr=4.18%, sys=7.06%, ctx=847, majf=0, minf=1 00:42:27.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:42:27.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:27.018 issued rwts: total=8730,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.018 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:27.018 job1: (groupid=0, jobs=1): err= 0: pid=3861607: Sat Oct 12 22:30:45 2024 00:42:27.018 read: IOPS=3224, BW=12.6MiB/s (13.2MB/s)(13.2MiB/1045msec) 00:42:27.018 slat (nsec): min=881, max=14429k, avg=116819.64, stdev=775201.41 00:42:27.018 clat (usec): min=5973, max=76584, avg=14325.06, stdev=10340.81 00:42:27.018 lat (usec): min=5981, max=76590, avg=14441.88, stdev=10423.05 00:42:27.018 clat percentiles (usec): 00:42:27.018 | 1.00th=[ 6587], 5.00th=[ 8094], 10.00th=[ 8291], 20.00th=[ 8586], 00:42:27.018 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10814], 00:42:27.018 | 70.00th=[13829], 80.00th=[19792], 90.00th=[21365], 95.00th=[44827], 00:42:27.018 | 99.00th=[52691], 99.50th=[63701], 99.90th=[77071], 99.95th=[77071], 00:42:27.018 | 99.99th=[77071] 00:42:27.018 write: IOPS=3429, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1045msec); 0 zone resets 00:42:27.018 slat (nsec): min=1607, max=15990k, avg=164652.88, stdev=881139.74 00:42:27.018 clat (usec): min=7456, max=99151, avg=23270.07, stdev=13045.41 00:42:27.018 lat (usec): min=7464, max=99157, avg=23434.73, stdev=13109.17 00:42:27.018 clat percentiles (usec): 00:42:27.018 | 1.00th=[10028], 5.00th=[13173], 10.00th=[14091], 20.00th=[14615], 00:42:27.018 | 30.00th=[15139], 40.00th=[17171], 50.00th=[19530], 60.00th=[20841], 00:42:27.018 | 70.00th=[23725], 80.00th=[30278], 90.00th=[39060], 95.00th=[45351], 00:42:27.018 | 99.00th=[82314], 99.50th=[91751], 99.90th=[99091], 99.95th=[99091], 00:42:27.018 | 99.99th=[99091] 00:42:27.018 bw ( KiB/s): min=12288, max=16384, per=16.08%, avg=14336.00, stdev=2896.31, samples=2 00:42:27.018 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:42:27.018 lat (msec) : 10=25.51%, 
20=41.85%, 50=29.87%, 100=2.78% 00:42:27.018 cpu : usr=1.92%, sys=3.74%, ctx=429, majf=0, minf=1 00:42:27.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:27.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:27.018 issued rwts: total=3370,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.018 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:27.018 job2: (groupid=0, jobs=1): err= 0: pid=3861610: Sat Oct 12 22:30:45 2024 00:42:27.018 read: IOPS=5734, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1003msec) 00:42:27.018 slat (nsec): min=918, max=14734k, avg=74676.24, stdev=504553.73 00:42:27.018 clat (usec): min=1237, max=39660, avg=9606.05, stdev=4427.54 00:42:27.018 lat (usec): min=1955, max=39666, avg=9680.73, stdev=4455.27 00:42:27.018 clat percentiles (usec): 00:42:27.018 | 1.00th=[ 3982], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7439], 00:42:27.018 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9110], 00:42:27.018 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[13435], 95.00th=[14222], 00:42:27.018 | 99.00th=[37487], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:42:27.018 | 99.99th=[39584] 00:42:27.018 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:42:27.018 slat (nsec): min=1561, max=10746k, avg=86903.69, stdev=554615.26 00:42:27.018 clat (usec): min=701, max=95433, avg=11723.25, stdev=14210.83 00:42:27.018 lat (usec): min=709, max=95442, avg=11810.16, stdev=14307.83 00:42:27.018 clat percentiles (usec): 00:42:27.018 | 1.00th=[ 1729], 5.00th=[ 5211], 10.00th=[ 6521], 20.00th=[ 7177], 00:42:27.018 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8455], 00:42:27.018 | 70.00th=[ 8979], 80.00th=[10421], 90.00th=[14222], 95.00th=[28181], 00:42:27.018 | 99.00th=[89654], 99.50th=[92799], 99.90th=[94897], 99.95th=[95945], 00:42:27.018 | 99.99th=[95945] 
00:42:27.018 bw ( KiB/s): min=16776, max=32312, per=27.54%, avg=24544.00, stdev=10985.61, samples=2 00:42:27.018 iops : min= 4194, max= 8078, avg=6136.00, stdev=2746.40, samples=2 00:42:27.018 lat (usec) : 750=0.03%, 1000=0.03% 00:42:27.018 lat (msec) : 2=0.73%, 4=1.73%, 10=72.43%, 20=19.44%, 50=3.68% 00:42:27.018 lat (msec) : 100=1.93% 00:42:27.018 cpu : usr=4.09%, sys=4.79%, ctx=565, majf=0, minf=2 00:42:27.018 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:42:27.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:27.018 issued rwts: total=5752,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.018 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:27.018 job3: (groupid=0, jobs=1): err= 0: pid=3861615: Sat Oct 12 22:30:45 2024 00:42:27.018 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:42:27.018 slat (nsec): min=993, max=14565k, avg=111790.67, stdev=764112.46 00:42:27.018 clat (usec): min=3206, max=57987, avg=12744.28, stdev=7195.36 00:42:27.018 lat (usec): min=3215, max=57996, avg=12856.07, stdev=7260.95 00:42:27.018 clat percentiles (usec): 00:42:27.018 | 1.00th=[ 5276], 5.00th=[ 7177], 10.00th=[ 8029], 20.00th=[ 8717], 00:42:27.018 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:42:27.018 | 70.00th=[13829], 80.00th=[16057], 90.00th=[18744], 95.00th=[22152], 00:42:27.018 | 99.00th=[53740], 99.50th=[56361], 99.90th=[57934], 99.95th=[57934], 00:42:27.018 | 99.99th=[57934] 00:42:27.019 write: IOPS=4303, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1009msec); 0 zone resets 00:42:27.019 slat (nsec): min=1654, max=6821.6k, avg=120185.97, stdev=528106.51 00:42:27.019 clat (usec): min=1308, max=67748, avg=17444.05, stdev=10876.59 00:42:27.019 lat (usec): min=1318, max=67757, avg=17564.24, stdev=10932.19 00:42:27.019 clat percentiles (usec): 00:42:27.019 | 1.00th=[ 3884], 5.00th=[ 6521], 
10.00th=[ 6718], 20.00th=[ 8291], 00:42:27.019 | 30.00th=[11731], 40.00th=[14222], 50.00th=[14877], 60.00th=[16581], 00:42:27.019 | 70.00th=[20317], 80.00th=[22676], 90.00th=[28967], 95.00th=[41157], 00:42:27.019 | 99.00th=[60556], 99.50th=[63177], 99.90th=[67634], 99.95th=[67634], 00:42:27.019 | 99.99th=[67634] 00:42:27.019 bw ( KiB/s): min=16384, max=17328, per=18.91%, avg=16856.00, stdev=667.51, samples=2 00:42:27.019 iops : min= 4096, max= 4332, avg=4214.00, stdev=166.88, samples=2 00:42:27.019 lat (msec) : 2=0.02%, 4=0.71%, 10=29.85%, 20=49.69%, 50=17.58% 00:42:27.019 lat (msec) : 100=2.15% 00:42:27.019 cpu : usr=3.67%, sys=3.67%, ctx=490, majf=0, minf=1 00:42:27.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:42:27.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:27.019 issued rwts: total=4096,4342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:27.019 00:42:27.019 Run status group 0 (all jobs): 00:42:27.019 READ: bw=82.0MiB/s (86.0MB/s), 12.6MiB/s-33.9MiB/s (13.2MB/s-35.5MB/s), io=85.7MiB (89.9MB), run=1003-1045msec 00:42:27.019 WRITE: bw=87.0MiB/s (91.3MB/s), 13.4MiB/s-35.8MiB/s (14.0MB/s-37.5MB/s), io=91.0MiB (95.4MB), run=1003-1045msec 00:42:27.019 00:42:27.019 Disk stats (read/write): 00:42:27.019 nvme0n1: ios=7730/7866, merge=0/0, ticks=40396/39019, in_queue=79415, util=88.08% 00:42:27.019 nvme0n2: ios=2602/2960, merge=0/0, ticks=18239/33390, in_queue=51629, util=89.00% 00:42:27.019 nvme0n3: ios=4614/4743, merge=0/0, ticks=26593/44779, in_queue=71372, util=88.61% 00:42:27.019 nvme0n4: ios=3584/3855, merge=0/0, ticks=41455/61618, in_queue=103073, util=89.62% 00:42:27.019 22:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 
4096 -d 128 -t randwrite -r 1 -v 00:42:27.019 [global] 00:42:27.019 thread=1 00:42:27.019 invalidate=1 00:42:27.019 rw=randwrite 00:42:27.019 time_based=1 00:42:27.019 runtime=1 00:42:27.019 ioengine=libaio 00:42:27.019 direct=1 00:42:27.019 bs=4096 00:42:27.019 iodepth=128 00:42:27.019 norandommap=0 00:42:27.019 numjobs=1 00:42:27.019 00:42:27.019 verify_dump=1 00:42:27.019 verify_backlog=512 00:42:27.019 verify_state_save=0 00:42:27.019 do_verify=1 00:42:27.019 verify=crc32c-intel 00:42:27.019 [job0] 00:42:27.019 filename=/dev/nvme0n1 00:42:27.019 [job1] 00:42:27.019 filename=/dev/nvme0n2 00:42:27.019 [job2] 00:42:27.019 filename=/dev/nvme0n3 00:42:27.019 [job3] 00:42:27.019 filename=/dev/nvme0n4 00:42:27.019 Could not set queue depth (nvme0n1) 00:42:27.019 Could not set queue depth (nvme0n2) 00:42:27.019 Could not set queue depth (nvme0n3) 00:42:27.019 Could not set queue depth (nvme0n4) 00:42:27.019 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:27.019 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:27.019 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:27.019 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:27.019 fio-3.35 00:42:27.019 Starting 4 threads 00:42:28.432 00:42:28.432 job0: (groupid=0, jobs=1): err= 0: pid=3862130: Sat Oct 12 22:30:46 2024 00:42:28.432 read: IOPS=6078, BW=23.7MiB/s (24.9MB/s)(23.9MiB/1007msec) 00:42:28.432 slat (nsec): min=940, max=10518k, avg=78015.65, stdev=563741.91 00:42:28.432 clat (usec): min=1893, max=27490, avg=9747.61, stdev=3744.90 00:42:28.432 lat (usec): min=2306, max=27493, avg=9825.62, stdev=3783.65 00:42:28.432 clat percentiles (usec): 00:42:28.432 | 1.00th=[ 3851], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 6980], 00:42:28.432 | 30.00th=[ 7373], 
40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9110], 00:42:28.432 | 70.00th=[11338], 80.00th=[12518], 90.00th=[14091], 95.00th=[16909], 00:42:28.432 | 99.00th=[22414], 99.50th=[24249], 99.90th=[26870], 99.95th=[27395], 00:42:28.432 | 99.99th=[27395] 00:42:28.432 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:42:28.432 slat (nsec): min=1582, max=10312k, avg=79886.60, stdev=436321.37 00:42:28.432 clat (usec): min=495, max=60570, avg=11076.15, stdev=8478.25 00:42:28.432 lat (usec): min=527, max=60814, avg=11156.03, stdev=8527.32 00:42:28.432 clat percentiles (usec): 00:42:28.432 | 1.00th=[ 2409], 5.00th=[ 4178], 10.00th=[ 5669], 20.00th=[ 6980], 00:42:28.432 | 30.00th=[ 7177], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8455], 00:42:28.432 | 70.00th=[11207], 80.00th=[13829], 90.00th=[19530], 95.00th=[31065], 00:42:28.432 | 99.00th=[48497], 99.50th=[56361], 99.90th=[60556], 99.95th=[60556], 00:42:28.432 | 99.99th=[60556] 00:42:28.432 bw ( KiB/s): min=22000, max=27152, per=24.24%, avg=24576.00, stdev=3643.01, samples=2 00:42:28.432 iops : min= 5500, max= 6788, avg=6144.00, stdev=910.75, samples=2 00:42:28.432 lat (usec) : 500=0.01%, 750=0.02% 00:42:28.432 lat (msec) : 2=0.27%, 4=2.40%, 10=63.56%, 20=28.10%, 50=5.26% 00:42:28.432 lat (msec) : 100=0.38% 00:42:28.432 cpu : usr=4.27%, sys=5.86%, ctx=724, majf=0, minf=2 00:42:28.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:42:28.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:28.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:28.432 issued rwts: total=6121,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:28.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:28.432 job1: (groupid=0, jobs=1): err= 0: pid=3862132: Sat Oct 12 22:30:46 2024 00:42:28.432 read: IOPS=6425, BW=25.1MiB/s (26.3MB/s)(25.2MiB/1003msec) 00:42:28.432 slat (nsec): min=889, max=12841k, avg=71794.87, 
stdev=502879.66 00:42:28.432 clat (usec): min=1079, max=49498, avg=9842.15, stdev=6009.35 00:42:28.432 lat (usec): min=1768, max=51753, avg=9913.94, stdev=6051.66 00:42:28.432 clat percentiles (usec): 00:42:28.432 | 1.00th=[ 3818], 5.00th=[ 5080], 10.00th=[ 6194], 20.00th=[ 6652], 00:42:28.432 | 30.00th=[ 7242], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 8586], 00:42:28.432 | 70.00th=[ 9241], 80.00th=[11600], 90.00th=[14484], 95.00th=[21365], 00:42:28.432 | 99.00th=[40633], 99.50th=[42730], 99.90th=[49546], 99.95th=[49546], 00:42:28.432 | 99.99th=[49546] 00:42:28.432 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:42:28.432 slat (nsec): min=1482, max=15988k, avg=69295.45, stdev=482750.78 00:42:28.432 clat (usec): min=791, max=59683, avg=9559.29, stdev=6725.23 00:42:28.432 lat (usec): min=956, max=60569, avg=9628.59, stdev=6769.12 00:42:28.432 clat percentiles (usec): 00:42:28.432 | 1.00th=[ 3720], 5.00th=[ 4621], 10.00th=[ 5407], 20.00th=[ 6325], 00:42:28.432 | 30.00th=[ 6915], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7963], 00:42:28.432 | 70.00th=[ 8979], 80.00th=[11469], 90.00th=[14091], 95.00th=[17957], 00:42:28.432 | 99.00th=[47973], 99.50th=[53216], 99.90th=[59507], 99.95th=[59507], 00:42:28.432 | 99.99th=[59507] 00:42:28.432 bw ( KiB/s): min=22632, max=30616, per=26.26%, avg=26624.00, stdev=5645.54, samples=2 00:42:28.432 iops : min= 5658, max= 7654, avg=6656.00, stdev=1411.39, samples=2 00:42:28.432 lat (usec) : 1000=0.01% 00:42:28.432 lat (msec) : 2=0.26%, 4=1.85%, 10=71.79%, 20=20.96%, 50=4.72% 00:42:28.432 lat (msec) : 100=0.41% 00:42:28.432 cpu : usr=4.89%, sys=5.79%, ctx=595, majf=0, minf=1 00:42:28.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:42:28.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:28.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:28.432 issued rwts: total=6445,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:42:28.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:28.432 job2: (groupid=0, jobs=1): err= 0: pid=3862135: Sat Oct 12 22:30:46 2024 00:42:28.432 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:42:28.432 slat (nsec): min=933, max=17046k, avg=68009.69, stdev=540088.34 00:42:28.432 clat (usec): min=1031, max=34049, avg=9242.50, stdev=3246.47 00:42:28.432 lat (usec): min=1037, max=34060, avg=9310.51, stdev=3292.86 00:42:28.432 clat percentiles (usec): 00:42:28.432 | 1.00th=[ 1663], 5.00th=[ 3818], 10.00th=[ 6456], 20.00th=[ 7570], 00:42:28.432 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9503], 00:42:28.432 | 70.00th=[10028], 80.00th=[10683], 90.00th=[12911], 95.00th=[16319], 00:42:28.432 | 99.00th=[19530], 99.50th=[19530], 99.90th=[23725], 99.95th=[23725], 00:42:28.432 | 99.99th=[33817] 00:42:28.432 write: IOPS=6424, BW=25.1MiB/s (26.3MB/s)(25.3MiB/1007msec); 0 zone resets 00:42:28.432 slat (nsec): min=1533, max=9693.7k, avg=79555.13, stdev=559416.51 00:42:28.432 clat (usec): min=547, max=68908, avg=10982.94, stdev=10107.57 00:42:28.432 lat (usec): min=1031, max=68916, avg=11062.49, stdev=10174.60 00:42:28.432 clat percentiles (usec): 00:42:28.432 | 1.00th=[ 3785], 5.00th=[ 5145], 10.00th=[ 5932], 20.00th=[ 7701], 00:42:28.432 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:42:28.432 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[11469], 95.00th=[25560], 00:42:28.432 | 99.00th=[62129], 99.50th=[64750], 99.90th=[67634], 99.95th=[68682], 00:42:28.432 | 99.99th=[68682] 00:42:28.432 bw ( KiB/s): min=24576, max=26152, per=25.02%, avg=25364.00, stdev=1114.40, samples=2 00:42:28.432 iops : min= 6144, max= 6538, avg=6341.00, stdev=278.60, samples=2 00:42:28.432 lat (usec) : 750=0.01%, 1000=0.01% 00:42:28.432 lat (msec) : 2=1.10%, 4=2.30%, 10=68.72%, 20=25.06%, 50=1.24% 00:42:28.432 lat (msec) : 100=1.55% 00:42:28.432 cpu : usr=4.77%, sys=6.16%, ctx=421, majf=0, minf=1 00:42:28.432 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:42:28.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:28.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:28.432 issued rwts: total=6144,6469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:28.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:28.432 job3: (groupid=0, jobs=1): err= 0: pid=3862136: Sat Oct 12 22:30:46 2024 00:42:28.432 read: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1012msec) 00:42:28.432 slat (nsec): min=951, max=10110k, avg=80488.93, stdev=621314.98 00:42:28.432 clat (usec): min=3385, max=30816, avg=10182.72, stdev=3339.02 00:42:28.432 lat (usec): min=3388, max=30818, avg=10263.21, stdev=3396.80 00:42:28.432 clat percentiles (usec): 00:42:28.433 | 1.00th=[ 5014], 5.00th=[ 5997], 10.00th=[ 7177], 20.00th=[ 7898], 00:42:28.433 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10028], 00:42:28.433 | 70.00th=[10552], 80.00th=[11994], 90.00th=[14615], 95.00th=[17171], 00:42:28.433 | 99.00th=[21103], 99.50th=[21103], 99.90th=[29492], 99.95th=[30802], 00:42:28.433 | 99.99th=[30802] 00:42:28.433 write: IOPS=6306, BW=24.6MiB/s (25.8MB/s)(24.9MiB/1012msec); 0 zone resets 00:42:28.433 slat (nsec): min=1614, max=8976.9k, avg=74024.84, stdev=501575.99 00:42:28.433 clat (usec): min=1200, max=77508, avg=10312.67, stdev=7976.24 00:42:28.433 lat (usec): min=1209, max=77516, avg=10386.69, stdev=8019.16 00:42:28.433 clat percentiles (usec): 00:42:28.433 | 1.00th=[ 3294], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6980], 00:42:28.433 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 9110], 00:42:28.433 | 70.00th=[ 9503], 80.00th=[10683], 90.00th=[12649], 95.00th=[23987], 00:42:28.433 | 99.00th=[52167], 99.50th=[64226], 99.90th=[69731], 99.95th=[70779], 00:42:28.433 | 99.99th=[77071] 00:42:28.433 bw ( KiB/s): min=20032, max=30000, per=24.67%, avg=25016.00, stdev=7048.44, samples=2 00:42:28.433 iops : 
min= 5008, max= 7500, avg=6254.00, stdev=1762.11, samples=2 00:42:28.433 lat (msec) : 2=0.13%, 4=0.82%, 10=67.64%, 20=26.66%, 50=4.12% 00:42:28.433 lat (msec) : 100=0.63% 00:42:28.433 cpu : usr=4.55%, sys=6.43%, ctx=506, majf=0, minf=1 00:42:28.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:42:28.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:28.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:28.433 issued rwts: total=6144,6382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:28.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:28.433 00:42:28.433 Run status group 0 (all jobs): 00:42:28.433 READ: bw=95.9MiB/s (101MB/s), 23.7MiB/s-25.1MiB/s (24.9MB/s-26.3MB/s), io=97.1MiB (102MB), run=1003-1012msec 00:42:28.433 WRITE: bw=99.0MiB/s (104MB/s), 23.8MiB/s-25.9MiB/s (25.0MB/s-27.2MB/s), io=100MiB (105MB), run=1003-1012msec 00:42:28.433 00:42:28.433 Disk stats (read/write): 00:42:28.433 nvme0n1: ios=4658/4871, merge=0/0, ticks=44801/57156, in_queue=101957, util=86.77% 00:42:28.433 nvme0n2: ios=5157/5223, merge=0/0, ticks=39409/40529, in_queue=79938, util=87.45% 00:42:28.433 nvme0n3: ios=5013/5120, merge=0/0, ticks=32588/45094, in_queue=77682, util=96.19% 00:42:28.433 nvme0n4: ios=5600/5632, merge=0/0, ticks=54495/46892, in_queue=101387, util=96.25% 00:42:28.433 22:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:42:28.433 22:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3862470 00:42:28.433 22:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:42:28.433 22:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:42:28.433 [global] 00:42:28.433 thread=1 00:42:28.433 invalidate=1 
00:42:28.433 rw=read 00:42:28.433 time_based=1 00:42:28.433 runtime=10 00:42:28.433 ioengine=libaio 00:42:28.433 direct=1 00:42:28.433 bs=4096 00:42:28.433 iodepth=1 00:42:28.433 norandommap=1 00:42:28.433 numjobs=1 00:42:28.433 00:42:28.433 [job0] 00:42:28.433 filename=/dev/nvme0n1 00:42:28.433 [job1] 00:42:28.433 filename=/dev/nvme0n2 00:42:28.433 [job2] 00:42:28.433 filename=/dev/nvme0n3 00:42:28.433 [job3] 00:42:28.433 filename=/dev/nvme0n4 00:42:28.433 Could not set queue depth (nvme0n1) 00:42:28.433 Could not set queue depth (nvme0n2) 00:42:28.433 Could not set queue depth (nvme0n3) 00:42:28.433 Could not set queue depth (nvme0n4) 00:42:28.694 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:28.694 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:28.694 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:28.694 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:28.694 fio-3.35 00:42:28.694 Starting 4 threads 00:42:31.246 22:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:31.506 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9928704, buflen=4096 00:42:31.506 fio: pid=3862666, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:31.506 22:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:31.768 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9674752, buflen=4096 00:42:31.768 fio: pid=3862665, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:31.768 22:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:31.768 22:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:32.029 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3751936, buflen=4096 00:42:32.029 fio: pid=3862662, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:32.029 22:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:32.029 22:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:32.029 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11161600, buflen=4096 00:42:32.029 fio: pid=3862663, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:32.029 22:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:32.029 22:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:32.290 00:42:32.290 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3862662: Sat Oct 12 22:30:50 2024 00:42:32.290 read: IOPS=306, BW=1223KiB/s (1253kB/s)(3664KiB/2995msec) 00:42:32.290 slat (usec): min=16, max=16571, avg=61.67, stdev=748.22 00:42:32.290 clat (usec): min=736, max=42309, avg=3176.90, stdev=8666.68 00:42:32.290 lat (usec): min=762, max=42337, avg=3238.61, stdev=8690.85 00:42:32.290 clat 
percentiles (usec): 00:42:32.290 | 1.00th=[ 938], 5.00th=[ 1029], 10.00th=[ 1090], 20.00th=[ 1139], 00:42:32.290 | 30.00th=[ 1172], 40.00th=[ 1205], 50.00th=[ 1221], 60.00th=[ 1237], 00:42:32.290 | 70.00th=[ 1270], 80.00th=[ 1303], 90.00th=[ 1369], 95.00th=[ 1483], 00:42:32.290 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:32.290 | 99.99th=[42206] 00:42:32.290 bw ( KiB/s): min= 888, max= 1728, per=11.85%, avg=1259.20, stdev=308.84, samples=5 00:42:32.290 iops : min= 222, max= 432, avg=314.80, stdev=77.21, samples=5 00:42:32.290 lat (usec) : 750=0.11%, 1000=2.73% 00:42:32.290 lat (msec) : 2=92.15%, 50=4.91% 00:42:32.290 cpu : usr=0.70%, sys=1.07%, ctx=920, majf=0, minf=2 00:42:32.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.290 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.290 issued rwts: total=917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:32.290 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3862663: Sat Oct 12 22:30:50 2024 00:42:32.290 read: IOPS=859, BW=3435KiB/s (3518kB/s)(10.6MiB/3173msec) 00:42:32.290 slat (usec): min=6, max=26695, avg=54.95, stdev=770.93 00:42:32.290 clat (usec): min=488, max=41373, avg=1094.97, stdev=1407.42 00:42:32.290 lat (usec): min=518, max=44921, avg=1149.93, stdev=1637.01 00:42:32.290 clat percentiles (usec): 00:42:32.290 | 1.00th=[ 701], 5.00th=[ 799], 10.00th=[ 873], 20.00th=[ 955], 00:42:32.290 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:42:32.290 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1221], 00:42:32.290 | 99.00th=[ 1303], 99.50th=[ 1336], 99.90th=[41157], 99.95th=[41157], 00:42:32.290 | 99.99th=[41157] 00:42:32.290 bw ( KiB/s): min= 2476, max= 3704, 
per=32.70%, avg=3474.00, stdev=489.45, samples=6 00:42:32.290 iops : min= 619, max= 926, avg=868.50, stdev=122.36, samples=6 00:42:32.290 lat (usec) : 500=0.04%, 750=1.65%, 1000=29.35% 00:42:32.290 lat (msec) : 2=68.71%, 4=0.04%, 10=0.04%, 50=0.15% 00:42:32.290 cpu : usr=1.01%, sys=2.49%, ctx=2732, majf=0, minf=1 00:42:32.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.290 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.290 issued rwts: total=2726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:32.290 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3862665: Sat Oct 12 22:30:50 2024 00:42:32.290 read: IOPS=843, BW=3374KiB/s (3455kB/s)(9448KiB/2800msec) 00:42:32.290 slat (usec): min=6, max=14939, avg=38.63, stdev=412.35 00:42:32.290 clat (usec): min=325, max=41363, avg=1131.63, stdev=2183.64 00:42:32.290 lat (usec): min=356, max=41394, avg=1170.26, stdev=2221.17 00:42:32.290 clat percentiles (usec): 00:42:32.290 | 1.00th=[ 619], 5.00th=[ 816], 10.00th=[ 881], 20.00th=[ 947], 00:42:32.290 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1045], 00:42:32.290 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:42:32.290 | 99.00th=[ 1237], 99.50th=[ 1287], 99.90th=[41157], 99.95th=[41157], 00:42:32.290 | 99.99th=[41157] 00:42:32.290 bw ( KiB/s): min= 2568, max= 3840, per=31.56%, avg=3353.60, stdev=620.82, samples=5 00:42:32.290 iops : min= 642, max= 960, avg=838.40, stdev=155.21, samples=5 00:42:32.290 lat (usec) : 500=0.47%, 750=1.95%, 1000=34.74% 00:42:32.290 lat (msec) : 2=62.46%, 4=0.04%, 50=0.30% 00:42:32.290 cpu : usr=1.04%, sys=3.79%, ctx=2365, majf=0, minf=2 00:42:32.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:42:32.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.290 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.290 issued rwts: total=2363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:32.290 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3862666: Sat Oct 12 22:30:50 2024 00:42:32.290 read: IOPS=929, BW=3715KiB/s (3804kB/s)(9696KiB/2610msec) 00:42:32.290 slat (nsec): min=25615, max=61245, avg=27342.71, stdev=2937.38 00:42:32.290 clat (usec): min=566, max=1327, avg=1032.00, stdev=84.70 00:42:32.290 lat (usec): min=592, max=1354, avg=1059.35, stdev=84.56 00:42:32.290 clat percentiles (usec): 00:42:32.290 | 1.00th=[ 791], 5.00th=[ 889], 10.00th=[ 930], 20.00th=[ 971], 00:42:32.290 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:42:32.290 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:42:32.290 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1270], 99.95th=[ 1303], 00:42:32.290 | 99.99th=[ 1336] 00:42:32.290 bw ( KiB/s): min= 3736, max= 3776, per=35.39%, avg=3760.00, stdev=16.00, samples=5 00:42:32.290 iops : min= 934, max= 944, avg=940.00, stdev= 4.00, samples=5 00:42:32.290 lat (usec) : 750=0.29%, 1000=31.96% 00:42:32.290 lat (msec) : 2=67.71% 00:42:32.290 cpu : usr=1.80%, sys=3.68%, ctx=2425, majf=0, minf=2 00:42:32.290 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.290 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.290 issued rwts: total=2425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.290 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:32.290 00:42:32.290 Run status group 0 (all jobs): 00:42:32.291 READ: bw=10.4MiB/s (10.9MB/s), 1223KiB/s-3715KiB/s 
(1253kB/s-3804kB/s), io=32.9MiB (34.5MB), run=2610-3173msec 00:42:32.291 00:42:32.291 Disk stats (read/write): 00:42:32.291 nvme0n1: ios=898/0, merge=0/0, ticks=2660/0, in_queue=2660, util=94.23% 00:42:32.291 nvme0n2: ios=2657/0, merge=0/0, ticks=2846/0, in_queue=2846, util=93.31% 00:42:32.291 nvme0n3: ios=2174/0, merge=0/0, ticks=2346/0, in_queue=2346, util=96.03% 00:42:32.291 nvme0n4: ios=2424/0, merge=0/0, ticks=2305/0, in_queue=2305, util=96.46% 00:42:32.291 22:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:32.291 22:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:32.552 22:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:32.552 22:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:32.813 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:32.813 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:32.813 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:32.813 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:33.074 22:30:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3862470 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:33.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:33.074 nvmf hotplug test: fio failed as expected 00:42:33.074 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:33.335 rmmod nvme_tcp 00:42:33.335 rmmod nvme_fabrics 00:42:33.335 rmmod nvme_keyring 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:33.335 22:30:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 3859256 ']' 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 3859256 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3859256 ']' 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3859256 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:33.335 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3859256 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3859256' 00:42:33.596 killing process with pid 3859256 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3859256 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3859256 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:33.596 22:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:42:33.596 22:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:33.596 22:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:33.596 22:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.596 22:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:33.596 22:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:36.142 00:42:36.142 real 0m28.035s 00:42:36.142 user 2m20.701s 00:42:36.142 sys 0m12.082s 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:36.142 ************************************ 00:42:36.142 END TEST nvmf_fio_target 00:42:36.142 ************************************ 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:36.142 ************************************ 00:42:36.142 START TEST nvmf_bdevio 00:42:36.142 ************************************ 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:36.142 * Looking for test storage... 00:42:36.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:36.142 22:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:36.142 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:36.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.143 --rc genhtml_branch_coverage=1 
00:42:36.143 --rc genhtml_function_coverage=1 00:42:36.143 --rc genhtml_legend=1 00:42:36.143 --rc geninfo_all_blocks=1 00:42:36.143 --rc geninfo_unexecuted_blocks=1 00:42:36.143 00:42:36.143 ' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:36.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.143 --rc genhtml_branch_coverage=1 00:42:36.143 --rc genhtml_function_coverage=1 00:42:36.143 --rc genhtml_legend=1 00:42:36.143 --rc geninfo_all_blocks=1 00:42:36.143 --rc geninfo_unexecuted_blocks=1 00:42:36.143 00:42:36.143 ' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:36.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.143 --rc genhtml_branch_coverage=1 00:42:36.143 --rc genhtml_function_coverage=1 00:42:36.143 --rc genhtml_legend=1 00:42:36.143 --rc geninfo_all_blocks=1 00:42:36.143 --rc geninfo_unexecuted_blocks=1 00:42:36.143 00:42:36.143 ' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:36.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.143 --rc genhtml_branch_coverage=1 00:42:36.143 --rc genhtml_function_coverage=1 00:42:36.143 --rc genhtml_legend=1 00:42:36.143 --rc geninfo_all_blocks=1 00:42:36.143 --rc geninfo_unexecuted_blocks=1 00:42:36.143 00:42:36.143 ' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:36.143 22:30:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:42:36.143 22:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:44.304 22:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:44.304 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:44.305 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:44.305 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:44.305 22:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:44.305 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 
00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:44.305 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:44.305 
22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:44.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:44.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:42:44.305 00:42:44.305 --- 10.0.0.2 ping statistics --- 00:42:44.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:44.305 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:44.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
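The nvmf_tcp_init steps traced above (common.sh @250–@291) move one port of the e810 NIC into a private network namespace so target and initiator can talk over real hardware on a single host. A minimal dry-run sketch of that plumbing follows; the `run` wrapper is a hypothetical stand-in that records and prints each command instead of executing it (the real script runs them as root), with interface names and addresses taken from the log:

```shell
# Dry-run sketch of the namespace plumbing performed by nvmf_tcp_init in
# nvmf/common.sh. The run() wrapper only prints/records the commands; drop
# it (and run as root) to execute them for real.
CMDS=""
run() { CMDS="${CMDS}$* ; "; echo "$@"; }   # hypothetical wrapper, not in common.sh

TARGET_IF=cvl_0_0        # port moved into the target namespace
INITIATOR_IF=cvl_0_1     # port left in the default namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

After this, the two `ping -c 1` probes in the log (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) confirm the cross-namespace path before the target is started.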
00:42:44.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:42:44.305 00:42:44.305 --- 10.0.0.1 ping statistics --- 00:42:44.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:44.305 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@505 -- # nvmfpid=3867687 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 3867687 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3867687 ']' 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:44.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:44.305 22:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:44.305 [2024-10-12 22:31:01.645414] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:44.305 [2024-10-12 22:31:01.646369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:42:44.306 [2024-10-12 22:31:01.646405] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:44.306 [2024-10-12 22:31:01.729359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:44.306 [2024-10-12 22:31:01.761805] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:44.306 [2024-10-12 22:31:01.761839] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:44.306 [2024-10-12 22:31:01.761847] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:44.306 [2024-10-12 22:31:01.761854] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:44.306 [2024-10-12 22:31:01.761859] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:44.306 [2024-10-12 22:31:01.761999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:42:44.306 [2024-10-12 22:31:01.762116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:42:44.306 [2024-10-12 22:31:01.762251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:42:44.306 [2024-10-12 22:31:01.762252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:42:44.306 [2024-10-12 22:31:01.821638] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:44.306 [2024-10-12 22:31:01.822887] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:44.306 [2024-10-12 22:31:01.823135] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:42:44.306 [2024-10-12 22:31:01.823678] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:44.306 [2024-10-12 22:31:01.823724] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:44.306 [2024-10-12 22:31:02.475052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:44.306 Malloc0 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:44.306 [2024-10-12 22:31:02.551307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
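The rpc_cmd calls traced above (target/bdevio.sh @18–@22) build the target configuration over the UNIX socket step by step: transport, RAM-backed bdev, subsystem, namespace, listener. A sketch of that sequence is below; the `rpc` function is a hypothetical stand-in that records and prints each call, where the real test resolves rpc_cmd to SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock inside the target namespace:

```shell
# Dry-run sketch of the bdevio.sh target setup. Arguments mirror the log
# above; rpc() prints rather than issuing real JSON-RPC calls.
CALLS=""
rpc() { CALLS="${CALLS}$* ; "; echo "rpc.py $@"; }   # hypothetical wrapper

rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte in-capsule data
rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The 64 MiB / 512-byte geometry matches the "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" I/O target reported by bdevio later in the log.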
00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:42:44.306 { 00:42:44.306 "params": { 00:42:44.306 "name": "Nvme$subsystem", 00:42:44.306 "trtype": "$TEST_TRANSPORT", 00:42:44.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:44.306 "adrfam": "ipv4", 00:42:44.306 "trsvcid": "$NVMF_PORT", 00:42:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:44.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:44.306 "hdgst": ${hdgst:-false}, 00:42:44.306 "ddgst": ${ddgst:-false} 00:42:44.306 }, 00:42:44.306 "method": "bdev_nvme_attach_controller" 00:42:44.306 } 00:42:44.306 EOF 00:42:44.306 )") 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:42:44.306 22:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:42:44.306 "params": { 00:42:44.306 "name": "Nvme1", 00:42:44.306 "trtype": "tcp", 00:42:44.306 "traddr": "10.0.0.2", 00:42:44.306 "adrfam": "ipv4", 00:42:44.306 "trsvcid": "4420", 00:42:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:44.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:44.306 "hdgst": false, 00:42:44.306 "ddgst": false 00:42:44.306 }, 00:42:44.306 "method": "bdev_nvme_attach_controller" 00:42:44.306 }' 00:42:44.306 [2024-10-12 22:31:02.615999] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:42:44.306 [2024-10-12 22:31:02.616047] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3867798 ] 00:42:44.306 [2024-10-12 22:31:02.692770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:44.306 [2024-10-12 22:31:02.731783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:44.306 [2024-10-12 22:31:02.731941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:44.306 [2024-10-12 22:31:02.731941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:42:44.568 I/O targets: 00:42:44.568 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:44.568 00:42:44.568 00:42:44.568 CUnit - A unit testing framework for C - Version 2.1-3 00:42:44.568 http://cunit.sourceforge.net/ 00:42:44.568 00:42:44.568 00:42:44.568 Suite: bdevio tests on: Nvme1n1 00:42:44.568 Test: blockdev write read block ...passed 00:42:44.568 Test: blockdev write zeroes read block ...passed 00:42:44.568 Test: blockdev write zeroes read no split ...passed 00:42:44.568 Test: blockdev 
write zeroes read split ...passed 00:42:44.568 Test: blockdev write zeroes read split partial ...passed 00:42:44.568 Test: blockdev reset ...[2024-10-12 22:31:03.030144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:42:44.568 [2024-10-12 22:31:03.030244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c1c50 (9): Bad file descriptor 00:42:44.568 [2024-10-12 22:31:03.036965] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:42:44.568 passed 00:42:44.568 Test: blockdev write read 8 blocks ...passed 00:42:44.568 Test: blockdev write read size > 128k ...passed 00:42:44.568 Test: blockdev write read invalid size ...passed 00:42:44.829 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:44.829 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:44.829 Test: blockdev write read max offset ...passed 00:42:44.829 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:44.829 Test: blockdev writev readv 8 blocks ...passed 00:42:44.829 Test: blockdev writev readv 30 x 1block ...passed 00:42:44.829 Test: blockdev writev readv block ...passed 00:42:44.829 Test: blockdev writev readv size > 128k ...passed 00:42:44.829 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:44.829 Test: blockdev comparev and writev ...[2024-10-12 22:31:03.264688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:44.829 [2024-10-12 22:31:03.264749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:44.829 [2024-10-12 22:31:03.264767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:44.830 [2024-10-12 22:31:03.264776] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:44.830 [2024-10-12 22:31:03.265388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:44.830 [2024-10-12 22:31:03.265402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:44.830 [2024-10-12 22:31:03.265416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:44.830 [2024-10-12 22:31:03.265424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:44.830 [2024-10-12 22:31:03.266049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:44.830 [2024-10-12 22:31:03.266060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:44.830 [2024-10-12 22:31:03.266073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:44.830 [2024-10-12 22:31:03.266081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:44.830 [2024-10-12 22:31:03.266725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:44.830 [2024-10-12 22:31:03.266736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:44.830 [2024-10-12 22:31:03.266750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:42:44.830 [2024-10-12 22:31:03.266757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:44.830 passed 00:42:45.091 Test: blockdev nvme passthru rw ...passed 00:42:45.091 Test: blockdev nvme passthru vendor specific ...[2024-10-12 22:31:03.351978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:45.091 [2024-10-12 22:31:03.351999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:45.091 [2024-10-12 22:31:03.352443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:45.091 [2024-10-12 22:31:03.352456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:45.091 [2024-10-12 22:31:03.352838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:45.091 [2024-10-12 22:31:03.352849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:45.091 [2024-10-12 22:31:03.353233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:45.091 [2024-10-12 22:31:03.353246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:45.091 passed 00:42:45.091 Test: blockdev nvme admin passthru ...passed 00:42:45.091 Test: blockdev copy ...passed 00:42:45.091 00:42:45.091 Run Summary: Type Total Ran Passed Failed Inactive 00:42:45.091 suites 1 1 n/a 0 0 00:42:45.091 tests 23 23 23 0 0 00:42:45.091 asserts 152 152 152 0 n/a 00:42:45.091 00:42:45.091 Elapsed time = 1.009 seconds 00:42:45.091 22:31:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:45.091 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:45.352 rmmod nvme_tcp 00:42:45.352 rmmod nvme_fabrics 00:42:45.352 rmmod nvme_keyring 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@513 -- # 
'[' -n 3867687 ']' 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 3867687 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3867687 ']' 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3867687 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3867687 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3867687' 00:42:45.352 killing process with pid 3867687 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3867687 00:42:45.352 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3867687 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 
00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:45.613 22:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:47.526 22:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:47.526 00:42:47.526 real 0m11.827s 00:42:47.526 user 0m8.882s 00:42:47.526 sys 0m6.213s 00:42:47.526 22:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:47.526 22:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:47.526 ************************************ 00:42:47.526 END TEST nvmf_bdevio 00:42:47.526 ************************************ 00:42:47.787 22:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:47.787 00:42:47.787 real 4m56.448s 00:42:47.787 user 10m16.500s 00:42:47.787 sys 2m4.652s 00:42:47.787 22:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:42:47.787 22:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:47.787 ************************************ 00:42:47.787 END TEST nvmf_target_core_interrupt_mode 00:42:47.787 ************************************ 00:42:47.787 22:31:06 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:47.787 22:31:06 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:47.787 22:31:06 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:47.787 22:31:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:47.787 ************************************ 00:42:47.787 START TEST nvmf_interrupt 00:42:47.787 ************************************ 00:42:47.788 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:47.788 * Looking for test storage... 
00:42:47.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:47.788 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:47.788 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:42:47.788 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:48.049 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:48.049 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:48.049 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:48.049 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:48.049 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:48.049 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:48.049 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:48.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.050 --rc genhtml_branch_coverage=1 00:42:48.050 --rc genhtml_function_coverage=1 00:42:48.050 --rc genhtml_legend=1 00:42:48.050 --rc geninfo_all_blocks=1 00:42:48.050 --rc geninfo_unexecuted_blocks=1 00:42:48.050 00:42:48.050 ' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:48.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.050 --rc genhtml_branch_coverage=1 00:42:48.050 --rc 
genhtml_function_coverage=1 00:42:48.050 --rc genhtml_legend=1 00:42:48.050 --rc geninfo_all_blocks=1 00:42:48.050 --rc geninfo_unexecuted_blocks=1 00:42:48.050 00:42:48.050 ' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:48.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.050 --rc genhtml_branch_coverage=1 00:42:48.050 --rc genhtml_function_coverage=1 00:42:48.050 --rc genhtml_legend=1 00:42:48.050 --rc geninfo_all_blocks=1 00:42:48.050 --rc geninfo_unexecuted_blocks=1 00:42:48.050 00:42:48.050 ' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:48.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.050 --rc genhtml_branch_coverage=1 00:42:48.050 --rc genhtml_function_coverage=1 00:42:48.050 --rc genhtml_legend=1 00:42:48.050 --rc geninfo_all_blocks=1 00:42:48.050 --rc geninfo_unexecuted_blocks=1 00:42:48.050 00:42:48.050 ' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:48.050 
22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.050 
22:31:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:48.050 22:31:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # prepare_net_devs 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@434 -- # local -g is_hw=no 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # remove_spdk_ns 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:42:48.050 
22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:42:48.050 22:31:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:56.197 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:56.197 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:42:56.197 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:56.198 22:31:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:56.198 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:56.198 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:56.198 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ up == up ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:56.198 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # is_hw=yes 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:56.198 22:31:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:56.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:56.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:42:56.198 00:42:56.198 --- 10.0.0.2 ping statistics --- 00:42:56.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:56.198 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:56.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:56.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:42:56.198 00:42:56.198 --- 10.0.0.1 ping statistics --- 00:42:56.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:56.198 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # return 0 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # nvmfpid=3872141 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # waitforlisten 3872141 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3872141 ']' 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:56.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:56.198 22:31:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:56.198 [2024-10-12 22:31:13.695877] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:56.198 [2024-10-12 22:31:13.696849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:42:56.198 [2024-10-12 22:31:13.696890] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:56.198 [2024-10-12 22:31:13.781448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:56.198 [2024-10-12 22:31:13.812888] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:56.198 [2024-10-12 22:31:13.812924] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:56.199 [2024-10-12 22:31:13.812932] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:56.199 [2024-10-12 22:31:13.812938] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:56.199 [2024-10-12 22:31:13.812944] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:56.199 [2024-10-12 22:31:13.813081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:42:56.199 [2024-10-12 22:31:13.813083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:42:56.199 [2024-10-12 22:31:13.861422] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:56.199 [2024-10-12 22:31:13.861896] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:56.199 [2024-10-12 22:31:13.862271] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:56.199 5000+0 records in 00:42:56.199 5000+0 records out 00:42:56.199 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0192419 s, 532 MB/s 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:56.199 AIO0 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.199 22:31:14 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:56.199 [2024-10-12 22:31:14.618099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:56.199 [2024-10-12 22:31:14.670442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3872141 0 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3872141 0 idle 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3872141 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3872141 -w 256 00:42:56.199 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3872141 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0' 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3872141 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:56.461 
22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3872141 1 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3872141 1 idle 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3872141 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3872141 -w 256 00:42:56.461 22:31:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:56.721 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3872185 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:42:56.721 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3872185 root 20 0 128.2g 
44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:42:56.721 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:56.721 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:56.721 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:56.721 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3872436 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3872141 0 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3872141 0 busy 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3872141 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3872141 -w 256 00:42:56.722 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3872141 root 20 0 128.2g 44928 32256 R 66.7 0.0 0:00.34 reactor_0' 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3872141 root 20 0 128.2g 44928 32256 R 66.7 0.0 0:00.34 reactor_0 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:56.983 22:31:15 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:56.983 22:31:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3872141 1 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3872141 1 busy 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3872141 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3872141 -w 256 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3872185 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.24 reactor_1' 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3872185 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.24 reactor_1 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:56.984 22:31:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3872436 00:43:06.986 Initializing NVMe Controllers 00:43:06.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:06.986 Controller IO queue size 256, less than required. 00:43:06.986 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:06.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:06.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:06.986 Initialization complete. Launching workers. 
00:43:06.986 ======================================================== 00:43:06.986 Latency(us) 00:43:06.986 Device Information : IOPS MiB/s Average min max 00:43:06.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 20191.50 78.87 12682.72 3771.59 32611.77 00:43:06.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18667.10 72.92 13715.78 7796.22 30701.23 00:43:06.987 ======================================================== 00:43:06.987 Total : 38858.60 151.79 13178.99 3771.59 32611.77 00:43:06.987 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3872141 0 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3872141 0 idle 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3872141 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3872141 -w 256 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3872141 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.24 reactor_0' 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3872141 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.24 reactor_0 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3872141 1 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3872141 1 idle 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3872141 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:06.987 22:31:25 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3872141 -w 256 00:43:06.987 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3872185 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3872185 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:07.247 22:31:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:07.819 22:31:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:43:07.819 22:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:43:07.819 22:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:43:07.819 22:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:43:07.819 22:31:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3872141 0 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3872141 0 idle 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3872141 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3872141 -w 256 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3872141 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.61 reactor_0' 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3872141 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.61 reactor_0 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:10.364 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3872141 1 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3872141 1 idle 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3872141 00:43:10.365 
22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3872141 -w 256 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3872185 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3872185 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:10.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:10.365 22:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:10.365 rmmod nvme_tcp 00:43:10.626 rmmod nvme_fabrics 00:43:10.626 rmmod nvme_keyring 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:10.626 22:31:28 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@513 -- # '[' -n 3872141 ']' 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # killprocess 3872141 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3872141 ']' 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3872141 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3872141 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3872141' 00:43:10.626 killing process with pid 3872141 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3872141 00:43:10.626 22:31:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3872141 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # iptables-save 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@787 -- # iptables-restore 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:10.626 22:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:13.173 22:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:13.173 00:43:13.173 real 0m25.073s 00:43:13.173 user 0m40.270s 00:43:13.173 sys 0m9.516s 00:43:13.173 22:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:13.173 22:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:13.173 ************************************ 00:43:13.173 END TEST nvmf_interrupt 00:43:13.173 ************************************ 00:43:13.173 00:43:13.173 real 37m55.955s 00:43:13.173 user 91m52.726s 00:43:13.173 sys 11m20.968s 00:43:13.173 22:31:31 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:13.173 22:31:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:13.173 ************************************ 00:43:13.173 END TEST nvmf_tcp 00:43:13.173 ************************************ 00:43:13.173 22:31:31 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:43:13.173 22:31:31 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:13.173 22:31:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:13.173 22:31:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:13.173 22:31:31 -- common/autotest_common.sh@10 -- # set +x 00:43:13.173 ************************************ 
00:43:13.173 START TEST spdkcli_nvmf_tcp 00:43:13.173 ************************************ 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:13.173 * Looking for test storage... 00:43:13.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:13.173 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:13.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.174 --rc genhtml_branch_coverage=1 00:43:13.174 --rc genhtml_function_coverage=1 00:43:13.174 --rc genhtml_legend=1 00:43:13.174 --rc geninfo_all_blocks=1 00:43:13.174 --rc geninfo_unexecuted_blocks=1 00:43:13.174 00:43:13.174 ' 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:13.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.174 --rc genhtml_branch_coverage=1 00:43:13.174 --rc genhtml_function_coverage=1 00:43:13.174 --rc genhtml_legend=1 00:43:13.174 --rc geninfo_all_blocks=1 
00:43:13.174 --rc geninfo_unexecuted_blocks=1 00:43:13.174 00:43:13.174 ' 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:13.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.174 --rc genhtml_branch_coverage=1 00:43:13.174 --rc genhtml_function_coverage=1 00:43:13.174 --rc genhtml_legend=1 00:43:13.174 --rc geninfo_all_blocks=1 00:43:13.174 --rc geninfo_unexecuted_blocks=1 00:43:13.174 00:43:13.174 ' 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:13.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.174 --rc genhtml_branch_coverage=1 00:43:13.174 --rc genhtml_function_coverage=1 00:43:13.174 --rc genhtml_legend=1 00:43:13.174 --rc geninfo_all_blocks=1 00:43:13.174 --rc geninfo_unexecuted_blocks=1 00:43:13.174 00:43:13.174 ' 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:13.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3875613 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3875613 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3875613 ']' 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:43:13.174 
22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:13.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:13.174 22:31:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:13.174 [2024-10-12 22:31:31.592368] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:43:13.174 [2024-10-12 22:31:31.592423] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875613 ] 00:43:13.436 [2024-10-12 22:31:31.670834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:13.436 [2024-10-12 22:31:31.707161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:13.436 [2024-10-12 22:31:31.707185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:14.007 22:31:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:43:14.007 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:43:14.007 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:43:14.008 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:43:14.008 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:43:14.008 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:43:14.008 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:43:14.008 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:14.008 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:14.008 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:43:14.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:43:14.008 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:43:14.008 ' 00:43:17.307 [2024-10-12 22:31:35.135125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:18.249 [2024-10-12 22:31:36.499252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:43:20.797 [2024-10-12 22:31:39.026259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:43:23.343 [2024-10-12 22:31:41.252714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:43:24.727 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:43:24.727 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:43:24.727 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:43:24.727 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:43:24.727 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:43:24.727 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:43:24.727 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:43:24.727 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:24.727 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:24.727 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:43:24.727 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:43:24.727 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:43:24.727 22:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:43:24.727 22:31:43 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:43:24.727 22:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:24.727 22:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:43:24.727 22:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:24.727 22:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:24.727 22:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:43:24.727 22:31:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:43:24.988 22:31:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:43:25.249 22:31:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:43:25.249 22:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:43:25.249 22:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:25.249 22:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:25.249 22:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:43:25.249 22:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:25.249 22:31:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:25.249 22:31:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:43:25.249 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:43:25.249 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:43:25.249 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:43:25.249 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:43:25.249 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:43:25.249 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:43:25.249 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:43:25.249 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:43:25.249 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:43:25.249 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:43:25.249 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:43:25.249 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:43:25.249 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:43:25.249 ' 00:43:31.836 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:31.836 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:31.836 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:31.836 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:43:31.836 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:43:31.836 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:43:31.836 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:31.836 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:31.836 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:31.836 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:31.836 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:31.836 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:31.836 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:31.836 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3875613 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3875613 ']' 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3875613 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3875613 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3875613' 00:43:31.836 killing process with pid 3875613 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3875613 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3875613 00:43:31.836 22:31:49 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3875613 ']' 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3875613 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3875613 ']' 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3875613 00:43:31.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3875613) - No such process 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3875613 is not found' 00:43:31.836 Process with pid 3875613 is not found 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:31.836 00:43:31.836 real 0m18.133s 00:43:31.836 user 0m40.309s 00:43:31.836 sys 0m0.871s 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:31.836 22:31:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:31.836 ************************************ 00:43:31.836 END TEST spdkcli_nvmf_tcp 00:43:31.836 ************************************ 00:43:31.836 22:31:49 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:31.836 22:31:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:31.836 22:31:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:43:31.836 22:31:49 -- common/autotest_common.sh@10 -- # set +x 00:43:31.836 ************************************ 00:43:31.836 START TEST nvmf_identify_passthru 00:43:31.836 ************************************ 00:43:31.836 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:31.836 * Looking for test storage... 00:43:31.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:31.836 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:31.836 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:43:31.836 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:31.836 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:31.836 22:31:49 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:43:31.836 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:31.836 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:31.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:31.836 --rc genhtml_branch_coverage=1 00:43:31.837 --rc genhtml_function_coverage=1 00:43:31.837 --rc genhtml_legend=1 00:43:31.837 --rc geninfo_all_blocks=1 00:43:31.837 --rc geninfo_unexecuted_blocks=1 00:43:31.837 
00:43:31.837 ' 00:43:31.837 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:31.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:31.837 --rc genhtml_branch_coverage=1 00:43:31.837 --rc genhtml_function_coverage=1 00:43:31.837 --rc genhtml_legend=1 00:43:31.837 --rc geninfo_all_blocks=1 00:43:31.837 --rc geninfo_unexecuted_blocks=1 00:43:31.837 00:43:31.837 ' 00:43:31.837 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:31.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:31.837 --rc genhtml_branch_coverage=1 00:43:31.837 --rc genhtml_function_coverage=1 00:43:31.837 --rc genhtml_legend=1 00:43:31.837 --rc geninfo_all_blocks=1 00:43:31.837 --rc geninfo_unexecuted_blocks=1 00:43:31.837 00:43:31.837 ' 00:43:31.837 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:31.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:31.837 --rc genhtml_branch_coverage=1 00:43:31.837 --rc genhtml_function_coverage=1 00:43:31.837 --rc genhtml_legend=1 00:43:31.837 --rc geninfo_all_blocks=1 00:43:31.837 --rc geninfo_unexecuted_blocks=1 00:43:31.837 00:43:31.837 ' 00:43:31.837 22:31:49 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:31.837 22:31:49 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:31.837 22:31:49 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:31.837 22:31:49 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:31.837 22:31:49 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:31.837 22:31:49 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:31.837 22:31:49 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.837 22:31:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.837 22:31:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.837 22:31:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:31.837 22:31:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:43:31.837 22:31:49 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:31.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:31.837 22:31:49 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:31.837 22:31:49 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:31.837 22:31:49 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:31.837 22:31:49 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:31.837 22:31:49 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:31.837 22:31:49 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.837 22:31:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.837 22:31:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.837 22:31:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:31.837 22:31:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:31.837 22:31:49 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:31.837 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:31.837 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:31.837 22:31:49 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:43:31.837 22:31:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:38.526 
22:31:56 nvmf_identify_passthru -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:38.526 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:38.526 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 
00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:38.526 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:38.526 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:38.526 22:31:56 nvmf_identify_passthru -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:38.527 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@438 -- # is_hw=yes 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:38.527 22:31:56 
nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:38.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:38.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:43:38.527 00:43:38.527 --- 10.0.0.2 ping statistics --- 00:43:38.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:38.527 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:38.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:38.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:43:38.527 00:43:38.527 --- 10.0.0.1 ping statistics --- 00:43:38.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:38.527 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # return 0 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:38.527 22:31:56 nvmf_identify_passthru -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:38.527 22:31:57 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:38.527 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:38.527 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:38.788 22:31:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:38.788 
22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:43:38.788 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:43:38.788 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:43:38.788 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:43:38.788 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:43:38.788 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:43:38.788 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:38.788 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:43:38.789 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:43:38.789 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:43:38.789 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:43:38.789 22:31:57 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:43:38.789 22:31:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:43:38.789 22:31:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:43:38.789 22:31:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:43:38.789 22:31:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:38.789 22:31:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:39.360 22:31:57 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:43:39.360 22:31:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:43:39.360 22:31:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:39.360 22:31:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:39.629 22:31:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:43:39.629 22:31:58 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:39.629 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:39.629 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:39.890 22:31:58 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:39.890 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:39.890 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:39.890 22:31:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3883033 00:43:39.890 22:31:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:39.890 22:31:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:39.890 22:31:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3883033 00:43:39.890 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3883033 ']' 00:43:39.890 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:43:39.890 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:39.890 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:39.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:39.890 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:39.890 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:39.890 [2024-10-12 22:31:58.184699] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:43:39.890 [2024-10-12 22:31:58.184749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:39.890 [2024-10-12 22:31:58.267213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:39.890 [2024-10-12 22:31:58.307031] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:39.890 [2024-10-12 22:31:58.307074] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:39.890 [2024-10-12 22:31:58.307083] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:39.890 [2024-10-12 22:31:58.307090] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:39.890 [2024-10-12 22:31:58.307095] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:39.890 [2024-10-12 22:31:58.307175] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:43:39.890 [2024-10-12 22:31:58.307332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:43:39.890 [2024-10-12 22:31:58.307484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:39.890 [2024-10-12 22:31:58.307486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:43:40.832 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:40.832 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:43:40.832 22:31:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:40.832 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:40.832 22:31:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:40.832 INFO: Log level set to 20 00:43:40.832 INFO: Requests: 00:43:40.832 { 00:43:40.832 "jsonrpc": "2.0", 00:43:40.832 "method": "nvmf_set_config", 00:43:40.832 "id": 1, 00:43:40.832 "params": { 00:43:40.832 "admin_cmd_passthru": { 00:43:40.832 "identify_ctrlr": true 00:43:40.832 } 00:43:40.832 } 00:43:40.832 } 00:43:40.832 00:43:40.832 INFO: response: 00:43:40.832 { 00:43:40.832 "jsonrpc": "2.0", 00:43:40.832 "id": 1, 00:43:40.832 "result": true 00:43:40.832 } 00:43:40.832 00:43:40.832 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:40.832 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:40.832 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:40.832 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:40.832 INFO: Setting log level to 20 00:43:40.832 INFO: Setting log level to 20 00:43:40.832 INFO: Log level set to 20 00:43:40.832 INFO: Log level set to 20 00:43:40.832 
INFO: Requests: 00:43:40.832 { 00:43:40.832 "jsonrpc": "2.0", 00:43:40.832 "method": "framework_start_init", 00:43:40.832 "id": 1 00:43:40.832 } 00:43:40.832 00:43:40.832 INFO: Requests: 00:43:40.832 { 00:43:40.832 "jsonrpc": "2.0", 00:43:40.832 "method": "framework_start_init", 00:43:40.832 "id": 1 00:43:40.832 } 00:43:40.832 00:43:40.832 [2024-10-12 22:31:59.065311] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:40.832 INFO: response: 00:43:40.832 { 00:43:40.832 "jsonrpc": "2.0", 00:43:40.832 "id": 1, 00:43:40.832 "result": true 00:43:40.832 } 00:43:40.832 00:43:40.832 INFO: response: 00:43:40.832 { 00:43:40.832 "jsonrpc": "2.0", 00:43:40.832 "id": 1, 00:43:40.832 "result": true 00:43:40.832 } 00:43:40.832 00:43:40.832 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:40.832 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:40.832 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:40.832 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:40.832 INFO: Setting log level to 40 00:43:40.832 INFO: Setting log level to 40 00:43:40.832 INFO: Setting log level to 40 00:43:40.832 [2024-10-12 22:31:59.078633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:40.832 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:40.832 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:40.832 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:40.832 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:40.832 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:43:40.832 22:31:59 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:40.832 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:41.094 Nvme0n1 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:41.094 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:41.094 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:41.094 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:41.094 [2024-10-12 22:31:59.464446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:41.094 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:41.094 22:31:59 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:41.094 [ 00:43:41.094 { 00:43:41.094 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:41.094 "subtype": "Discovery", 00:43:41.094 "listen_addresses": [], 00:43:41.094 "allow_any_host": true, 00:43:41.094 "hosts": [] 00:43:41.094 }, 00:43:41.094 { 00:43:41.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:41.094 "subtype": "NVMe", 00:43:41.094 "listen_addresses": [ 00:43:41.094 { 00:43:41.094 "trtype": "TCP", 00:43:41.094 "adrfam": "IPv4", 00:43:41.094 "traddr": "10.0.0.2", 00:43:41.094 "trsvcid": "4420" 00:43:41.094 } 00:43:41.094 ], 00:43:41.094 "allow_any_host": true, 00:43:41.094 "hosts": [], 00:43:41.094 "serial_number": "SPDK00000000000001", 00:43:41.094 "model_number": "SPDK bdev Controller", 00:43:41.094 "max_namespaces": 1, 00:43:41.094 "min_cntlid": 1, 00:43:41.094 "max_cntlid": 65519, 00:43:41.094 "namespaces": [ 00:43:41.094 { 00:43:41.094 "nsid": 1, 00:43:41.094 "bdev_name": "Nvme0n1", 00:43:41.094 "name": "Nvme0n1", 00:43:41.094 "nguid": "36344730526054870025384500000044", 00:43:41.094 "uuid": "36344730-5260-5487-0025-384500000044" 00:43:41.094 } 00:43:41.094 ] 00:43:41.094 } 00:43:41.094 ] 00:43:41.094 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:41.094 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:41.094 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:41.094 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:41.355 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:43:41.355 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:41.356 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:43:41.356 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:41.356 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:43:41.356 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:43:41.356 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:43:41.356 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:41.356 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:41.356 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:41.356 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:41.356 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:41.356 22:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:41.356 22:31:59 nvmf_identify_passthru -- nvmf/common.sh@512 -- # nvmfcleanup 00:43:41.356 22:31:59 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:43:41.356 22:31:59 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:41.356 22:31:59 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:43:41.356 22:31:59 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:41.356 22:31:59 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:41.356 rmmod nvme_tcp 00:43:41.356 rmmod nvme_fabrics 00:43:41.356 rmmod nvme_keyring 00:43:41.356 22:31:59 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:41.616 22:31:59 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:43:41.616 22:31:59 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:43:41.616 22:31:59 nvmf_identify_passthru -- nvmf/common.sh@513 -- # '[' -n 3883033 ']' 00:43:41.616 22:31:59 nvmf_identify_passthru -- nvmf/common.sh@514 -- # killprocess 3883033 00:43:41.616 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3883033 ']' 00:43:41.616 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3883033 00:43:41.616 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:43:41.616 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:41.616 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3883033 00:43:41.616 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:41.616 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:41.616 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3883033' 00:43:41.616 killing process with pid 3883033 00:43:41.616 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3883033 00:43:41.616 22:31:59 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3883033 00:43:41.877 22:32:00 nvmf_identify_passthru -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:43:41.877 22:32:00 nvmf_identify_passthru -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:43:41.877 22:32:00 nvmf_identify_passthru -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:43:41.877 22:32:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:43:41.877 22:32:00 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-save 00:43:41.877 22:32:00 nvmf_identify_passthru -- 
nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:43:41.877 22:32:00 nvmf_identify_passthru -- nvmf/common.sh@787 -- # iptables-restore 00:43:41.877 22:32:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:41.877 22:32:00 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:41.877 22:32:00 nvmf_identify_passthru -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:41.877 22:32:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:41.877 22:32:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:43.791 22:32:02 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:43.791 00:43:43.791 real 0m12.751s 00:43:43.791 user 0m9.923s 00:43:43.791 sys 0m6.224s 00:43:43.791 22:32:02 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:43.791 22:32:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:43.791 ************************************ 00:43:43.791 END TEST nvmf_identify_passthru 00:43:43.791 ************************************ 00:43:44.052 22:32:02 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:44.052 22:32:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:44.052 22:32:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:44.052 22:32:02 -- common/autotest_common.sh@10 -- # set +x 00:43:44.052 ************************************ 00:43:44.052 START TEST nvmf_dif 00:43:44.052 ************************************ 00:43:44.052 22:32:02 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:43:44.052 * Looking for test storage... 
00:43:44.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:44.052 22:32:02 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:44.052 22:32:02 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:43:44.052 22:32:02 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:44.052 22:32:02 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:44.052 22:32:02 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:44.052 22:32:02 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:44.052 22:32:02 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:44.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.052 --rc genhtml_branch_coverage=1 00:43:44.052 --rc genhtml_function_coverage=1 00:43:44.052 --rc genhtml_legend=1 00:43:44.052 --rc geninfo_all_blocks=1 00:43:44.052 --rc geninfo_unexecuted_blocks=1 00:43:44.052 00:43:44.052 ' 00:43:44.052 22:32:02 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:44.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.052 --rc genhtml_branch_coverage=1 00:43:44.052 --rc genhtml_function_coverage=1 00:43:44.052 --rc genhtml_legend=1 00:43:44.052 --rc geninfo_all_blocks=1 00:43:44.052 --rc geninfo_unexecuted_blocks=1 00:43:44.052 00:43:44.052 ' 00:43:44.052 22:32:02 nvmf_dif -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:43:44.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.052 --rc genhtml_branch_coverage=1 00:43:44.052 --rc genhtml_function_coverage=1 00:43:44.052 --rc genhtml_legend=1 00:43:44.052 --rc geninfo_all_blocks=1 00:43:44.052 --rc geninfo_unexecuted_blocks=1 00:43:44.052 00:43:44.052 ' 00:43:44.052 22:32:02 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:44.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:44.052 --rc genhtml_branch_coverage=1 00:43:44.052 --rc genhtml_function_coverage=1 00:43:44.052 --rc genhtml_legend=1 00:43:44.052 --rc geninfo_all_blocks=1 00:43:44.052 --rc geninfo_unexecuted_blocks=1 00:43:44.052 00:43:44.052 ' 00:43:44.052 22:32:02 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:44.052 22:32:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:44.313 22:32:02 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:44.313 22:32:02 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:44.313 22:32:02 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:44.313 22:32:02 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:44.313 22:32:02 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:44.313 22:32:02 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:44.313 22:32:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.314 22:32:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.314 22:32:02 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.314 22:32:02 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:44.314 22:32:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:44.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:44.314 22:32:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:44.314 22:32:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:43:44.314 22:32:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:44.314 22:32:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:44.314 22:32:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:44.314 22:32:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:44.314 22:32:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:43:44.314 22:32:02 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:43:44.314 22:32:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:43:52.454 22:32:09 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:52.454 
22:32:09 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:52.454 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:52.454 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:52.454 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@414 -- # [[ up == up ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:52.454 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@438 -- # is_hw=yes 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:52.454 22:32:09 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:52.454 22:32:09 nvmf_dif -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:52.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:52.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:43:52.455 00:43:52.455 --- 10.0.0.2 ping statistics --- 00:43:52.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:52.455 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:52.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:52.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:43:52.455 00:43:52.455 --- 10.0.0.1 ping statistics --- 00:43:52.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:52.455 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@446 -- # return 0 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:43:52.455 22:32:09 nvmf_dif -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:55.000 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:43:55.000 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:43:55.000 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:43:55.000 22:32:13 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:55.000 22:32:13 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:43:55.000 22:32:13 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:43:55.000 22:32:13 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:55.000 22:32:13 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:43:55.000 22:32:13 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:43:55.000 22:32:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:55.000 22:32:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:55.000 22:32:13 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:43:55.001 22:32:13 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:55.001 22:32:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:55.001 22:32:13 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=3888884 00:43:55.001 22:32:13 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 3888884 00:43:55.001 22:32:13 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:55.001 22:32:13 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3888884 ']' 00:43:55.001 22:32:13 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:55.001 22:32:13 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:55.001 22:32:13 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:55.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:55.001 22:32:13 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:55.001 22:32:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:55.261 [2024-10-12 22:32:13.489050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:43:55.261 [2024-10-12 22:32:13.489100] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:55.261 [2024-10-12 22:32:13.570521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:55.261 [2024-10-12 22:32:13.601358] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:55.261 [2024-10-12 22:32:13.601390] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:55.261 [2024-10-12 22:32:13.601398] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:55.261 [2024-10-12 22:32:13.601405] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:55.261 [2024-10-12 22:32:13.601410] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:55.261 [2024-10-12 22:32:13.601428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:43:55.832 22:32:14 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:55.832 22:32:14 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:43:55.832 22:32:14 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:43:55.832 22:32:14 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:55.832 22:32:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:56.093 22:32:14 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:56.093 22:32:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:56.093 22:32:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:56.093 22:32:14 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:56.093 22:32:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:56.093 [2024-10-12 22:32:14.359125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:56.093 22:32:14 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:56.093 22:32:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:56.093 22:32:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:56.093 22:32:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:56.093 22:32:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:56.093 ************************************ 00:43:56.093 START TEST fio_dif_1_default 00:43:56.093 ************************************ 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:56.093 bdev_null0 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:56.093 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:56.094 [2024-10-12 22:32:14.447545] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:43:56.094 { 00:43:56.094 "params": { 00:43:56.094 "name": "Nvme$subsystem", 00:43:56.094 "trtype": "$TEST_TRANSPORT", 00:43:56.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:56.094 "adrfam": "ipv4", 00:43:56.094 "trsvcid": "$NVMF_PORT", 00:43:56.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:56.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:56.094 "hdgst": ${hdgst:-false}, 00:43:56.094 "ddgst": ${ddgst:-false} 00:43:56.094 }, 00:43:56.094 "method": "bdev_nvme_attach_controller" 00:43:56.094 } 00:43:56.094 EOF 00:43:56.094 )") 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:43:56.094 "params": { 00:43:56.094 "name": "Nvme0", 00:43:56.094 "trtype": "tcp", 00:43:56.094 "traddr": "10.0.0.2", 00:43:56.094 "adrfam": "ipv4", 00:43:56.094 "trsvcid": "4420", 00:43:56.094 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:56.094 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:56.094 "hdgst": false, 00:43:56.094 "ddgst": false 00:43:56.094 }, 00:43:56.094 "method": "bdev_nvme_attach_controller" 00:43:56.094 }' 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:56.094 22:32:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:56.663 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:56.663 fio-3.35 
00:43:56.663 Starting 1 thread 00:44:08.874 00:44:08.874 filename0: (groupid=0, jobs=1): err= 0: pid=3889415: Sat Oct 12 22:32:25 2024 00:44:08.874 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10027msec) 00:44:08.874 slat (nsec): min=5405, max=67416, avg=6333.14, stdev=2684.96 00:44:08.874 clat (usec): min=40804, max=44130, avg=41073.01, stdev=329.42 00:44:08.874 lat (usec): min=40809, max=44174, avg=41079.35, stdev=330.58 00:44:08.874 clat percentiles (usec): 00:44:08.874 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:44:08.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:08.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:44:08.874 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:44:08.874 | 99.99th=[44303] 00:44:08.874 bw ( KiB/s): min= 352, max= 416, per=99.65%, avg=388.80, stdev=15.66, samples=20 00:44:08.874 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:44:08.874 lat (msec) : 50=100.00% 00:44:08.874 cpu : usr=93.44%, sys=6.31%, ctx=13, majf=0, minf=279 00:44:08.874 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:08.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:08.874 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:08.874 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:08.874 00:44:08.874 Run status group 0 (all jobs): 00:44:08.874 READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10027-10027msec 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.874 00:44:08.874 real 0m11.148s 00:44:08.874 user 0m18.417s 00:44:08.874 sys 0m1.076s 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 ************************************ 00:44:08.874 END TEST fio_dif_1_default 00:44:08.874 ************************************ 00:44:08.874 22:32:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:44:08.874 22:32:25 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:08.874 22:32:25 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 ************************************ 00:44:08.874 START TEST fio_dif_1_multi_subsystems 00:44:08.874 ************************************ 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 bdev_null0 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 [2024-10-12 22:32:25.677695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 bdev_null1 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 22:32:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:08.874 { 00:44:08.874 "params": { 00:44:08.874 "name": "Nvme$subsystem", 00:44:08.874 "trtype": "$TEST_TRANSPORT", 00:44:08.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:08.874 "adrfam": "ipv4", 00:44:08.874 "trsvcid": "$NVMF_PORT", 00:44:08.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:08.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:08.874 "hdgst": ${hdgst:-false}, 00:44:08.874 "ddgst": ${ddgst:-false} 00:44:08.874 }, 00:44:08.874 "method": "bdev_nvme_attach_controller" 00:44:08.874 } 00:44:08.874 EOF 00:44:08.874 )") 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:08.874 22:32:25 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:44:08.874 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:08.875 { 00:44:08.875 "params": { 00:44:08.875 "name": "Nvme$subsystem", 00:44:08.875 "trtype": "$TEST_TRANSPORT", 00:44:08.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:08.875 "adrfam": "ipv4", 00:44:08.875 "trsvcid": "$NVMF_PORT", 00:44:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:08.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:08.875 "hdgst": ${hdgst:-false}, 00:44:08.875 "ddgst": ${ddgst:-false} 00:44:08.875 }, 00:44:08.875 "method": "bdev_nvme_attach_controller" 00:44:08.875 } 00:44:08.875 EOF 00:44:08.875 )") 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:08.875 "params": { 00:44:08.875 "name": "Nvme0", 00:44:08.875 "trtype": "tcp", 00:44:08.875 "traddr": "10.0.0.2", 00:44:08.875 "adrfam": "ipv4", 00:44:08.875 "trsvcid": "4420", 00:44:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:08.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:08.875 "hdgst": false, 00:44:08.875 "ddgst": false 00:44:08.875 }, 00:44:08.875 "method": "bdev_nvme_attach_controller" 00:44:08.875 },{ 00:44:08.875 "params": { 00:44:08.875 "name": "Nvme1", 00:44:08.875 "trtype": "tcp", 00:44:08.875 "traddr": "10.0.0.2", 00:44:08.875 "adrfam": "ipv4", 00:44:08.875 "trsvcid": "4420", 00:44:08.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:08.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:08.875 "hdgst": false, 00:44:08.875 "ddgst": false 00:44:08.875 }, 00:44:08.875 "method": "bdev_nvme_attach_controller" 00:44:08.875 }' 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:08.875 22:32:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:08.875 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:08.875 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:08.875 fio-3.35 00:44:08.875 Starting 2 threads 00:44:18.869 00:44:18.869 filename0: (groupid=0, jobs=1): err= 0: pid=3891816: Sat Oct 12 22:32:37 2024 00:44:18.869 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10008msec) 00:44:18.869 slat (nsec): min=5408, max=38395, avg=6317.40, stdev=2043.07 00:44:18.869 clat (usec): min=562, max=42536, avg=40829.86, stdev=2581.54 00:44:18.869 lat (usec): min=568, max=42569, avg=40836.18, stdev=2581.65 00:44:18.869 clat percentiles (usec): 00:44:18.869 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:44:18.869 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:18.869 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:44:18.869 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:44:18.869 | 99.99th=[42730] 00:44:18.869 bw ( KiB/s): min= 384, max= 416, per=49.39%, avg=390.40, stdev=13.13, samples=20 00:44:18.869 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:44:18.869 lat (usec) : 750=0.41% 00:44:18.869 lat (msec) : 50=99.59% 00:44:18.869 cpu : usr=95.22%, sys=4.55%, ctx=32, majf=0, minf=84 00:44:18.869 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:18.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.870 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.870 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:18.870 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:18.870 filename1: (groupid=0, jobs=1): err= 0: pid=3891817: Sat Oct 12 22:32:37 2024 00:44:18.870 read: IOPS=99, BW=398KiB/s (408kB/s)(3984KiB/10010msec) 00:44:18.870 slat (nsec): min=5425, max=32185, avg=6360.14, stdev=1884.30 00:44:18.870 clat (usec): min=446, max=41549, avg=40181.44, stdev=5682.98 00:44:18.870 lat (usec): min=452, max=41555, avg=40187.80, stdev=5683.08 00:44:18.870 clat percentiles (usec): 00:44:18.870 | 1.00th=[ 461], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:44:18.870 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:18.870 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:44:18.870 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:44:18.870 | 99.99th=[41681] 00:44:18.870 bw ( KiB/s): min= 384, max= 448, per=50.15%, avg=396.80, stdev=19.14, samples=20 00:44:18.870 iops : min= 96, max= 112, avg=99.20, stdev= 4.79, samples=20 00:44:18.870 lat (usec) : 500=1.61%, 750=0.40% 00:44:18.870 lat (msec) : 50=97.99% 00:44:18.870 cpu : usr=95.37%, sys=4.44%, ctx=9, majf=0, minf=218 00:44:18.870 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:18.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:18.870 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:18.870 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:18.870 00:44:18.870 Run status group 0 (all jobs): 00:44:18.870 READ: bw=790KiB/s (809kB/s), 392KiB/s-398KiB/s (401kB/s-408kB/s), io=7904KiB (8094kB), run=10008-10010msec 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:18.870 22:32:37 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.870 00:44:18.870 real 0m11.551s 00:44:18.870 user 0m35.968s 00:44:18.870 sys 0m1.260s 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:18.870 22:32:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:18.870 ************************************ 00:44:18.870 END TEST fio_dif_1_multi_subsystems 00:44:18.870 ************************************ 00:44:18.870 22:32:37 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:44:18.870 22:32:37 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:18.870 22:32:37 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:18.870 22:32:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:18.870 ************************************ 00:44:18.870 START TEST fio_dif_rand_params 00:44:18.870 ************************************ 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:44:18.870 22:32:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.870 bdev_null0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:18.870 [2024-10-12 22:32:37.311183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:18.870 { 00:44:18.870 "params": { 00:44:18.870 "name": "Nvme$subsystem", 00:44:18.870 "trtype": "$TEST_TRANSPORT", 00:44:18.870 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:44:18.870 "adrfam": "ipv4", 00:44:18.870 "trsvcid": "$NVMF_PORT", 00:44:18.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:18.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:18.870 "hdgst": ${hdgst:-false}, 00:44:18.870 "ddgst": ${ddgst:-false} 00:44:18.870 }, 00:44:18.870 "method": "bdev_nvme_attach_controller" 00:44:18.870 } 00:44:18.870 EOF 00:44:18.870 )") 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:18.870 22:32:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:44:18.870 22:32:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:18.870 "params": { 00:44:18.870 "name": "Nvme0", 00:44:18.870 "trtype": "tcp", 00:44:18.870 "traddr": "10.0.0.2", 00:44:18.870 "adrfam": "ipv4", 00:44:18.870 "trsvcid": "4420", 00:44:18.871 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:18.871 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:18.871 "hdgst": false, 00:44:18.871 "ddgst": false 00:44:18.871 }, 00:44:18.871 "method": "bdev_nvme_attach_controller" 00:44:18.871 }' 00:44:19.153 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:19.153 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:19.153 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:19.153 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:19.153 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:19.153 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:19.153 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:19.153 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:19.153 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:19.153 22:32:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:19.420 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:19.420 ... 00:44:19.420 fio-3.35 00:44:19.420 Starting 3 threads 00:44:26.001 00:44:26.001 filename0: (groupid=0, jobs=1): err= 0: pid=3894123: Sat Oct 12 22:32:43 2024 00:44:26.001 read: IOPS=319, BW=39.9MiB/s (41.9MB/s)(202MiB/5048msec) 00:44:26.001 slat (nsec): min=5460, max=31810, avg=6182.46, stdev=1307.91 00:44:26.001 clat (usec): min=5206, max=87212, avg=9360.20, stdev=4613.39 00:44:26.001 lat (usec): min=5212, max=87218, avg=9366.38, stdev=4613.62 00:44:26.001 clat percentiles (usec): 00:44:26.001 | 1.00th=[ 6325], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8029], 00:44:26.001 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:44:26.001 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10421], 00:44:26.001 | 99.00th=[46924], 99.50th=[47449], 99.90th=[49546], 99.95th=[87557], 00:44:26.001 | 99.99th=[87557] 00:44:26.001 bw ( KiB/s): min=29892, max=47104, per=34.15%, avg=41184.40, stdev=4472.91, samples=10 00:44:26.001 iops : min= 233, max= 368, avg=321.70, stdev=35.09, samples=10 00:44:26.001 lat (msec) : 10=88.52%, 20=10.30%, 50=1.12%, 100=0.06% 00:44:26.001 cpu : usr=93.98%, sys=5.81%, ctx=7, majf=0, minf=171 00:44:26.001 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:26.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:26.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:26.001 issued rwts: total=1612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:26.001 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:26.001 filename0: (groupid=0, jobs=1): err= 0: pid=3894124: Sat Oct 12 22:32:43 2024 00:44:26.001 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(200MiB/5046msec) 00:44:26.001 slat (nsec): min=5537, max=29798, avg=8588.41, stdev=1109.50 00:44:26.001 
clat (usec): min=5027, max=49096, avg=9411.35, stdev=4801.75 00:44:26.001 lat (usec): min=5035, max=49104, avg=9419.93, stdev=4801.74 00:44:26.001 clat percentiles (usec): 00:44:26.001 | 1.00th=[ 5997], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 7832], 00:44:26.001 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:44:26.001 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10683], 00:44:26.001 | 99.00th=[47449], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:44:26.001 | 99.99th=[49021] 00:44:26.001 bw ( KiB/s): min=33280, max=43520, per=33.96%, avg=40960.00, stdev=3821.94, samples=10 00:44:26.001 iops : min= 260, max= 340, avg=320.00, stdev=29.86, samples=10 00:44:26.001 lat (msec) : 10=84.52%, 20=14.04%, 50=1.44% 00:44:26.001 cpu : usr=95.22%, sys=4.54%, ctx=6, majf=0, minf=96 00:44:26.001 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:26.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:26.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:26.001 issued rwts: total=1602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:26.001 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:26.001 filename0: (groupid=0, jobs=1): err= 0: pid=3894125: Sat Oct 12 22:32:43 2024 00:44:26.001 read: IOPS=305, BW=38.2MiB/s (40.0MB/s)(193MiB/5047msec) 00:44:26.001 slat (nsec): min=5459, max=32466, avg=8311.40, stdev=1477.88 00:44:26.001 clat (usec): min=5524, max=89316, avg=9755.25, stdev=5001.73 00:44:26.001 lat (usec): min=5533, max=89326, avg=9763.56, stdev=5002.00 00:44:26.001 clat percentiles (usec): 00:44:26.001 | 1.00th=[ 5997], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8291], 00:44:26.001 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:44:26.001 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:44:26.001 | 99.00th=[47449], 99.50th=[49021], 99.90th=[50594], 99.95th=[89654], 00:44:26.001 | 
99.99th=[89654] 00:44:26.001 bw ( KiB/s): min=29696, max=41472, per=32.67%, avg=39398.40, stdev=3535.83, samples=10 00:44:26.001 iops : min= 232, max= 324, avg=307.80, stdev=27.62, samples=10 00:44:26.001 lat (msec) : 10=76.65%, 20=21.98%, 50=1.17%, 100=0.19% 00:44:26.001 cpu : usr=95.26%, sys=4.52%, ctx=6, majf=0, minf=150 00:44:26.001 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:26.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:26.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:26.001 issued rwts: total=1542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:26.001 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:26.001 00:44:26.001 Run status group 0 (all jobs): 00:44:26.001 READ: bw=118MiB/s (123MB/s), 38.2MiB/s-39.9MiB/s (40.0MB/s-41.9MB/s), io=595MiB (623MB), run=5046-5048msec 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 
-- # xtrace_disable 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:44:26.001 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 bdev_null0 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 [2024-10-12 22:32:43.552287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 bdev_null1 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:44:26.002 bdev_null2 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:26.002 { 00:44:26.002 "params": { 00:44:26.002 "name": "Nvme$subsystem", 00:44:26.002 "trtype": "$TEST_TRANSPORT", 00:44:26.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:26.002 "adrfam": "ipv4", 00:44:26.002 "trsvcid": "$NVMF_PORT", 00:44:26.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:26.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:26.002 "hdgst": ${hdgst:-false}, 00:44:26.002 "ddgst": ${ddgst:-false} 00:44:26.002 }, 00:44:26.002 "method": "bdev_nvme_attach_controller" 00:44:26.002 } 00:44:26.002 EOF 00:44:26.002 )") 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # shift 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:26.002 { 00:44:26.002 "params": { 00:44:26.002 "name": "Nvme$subsystem", 00:44:26.002 "trtype": "$TEST_TRANSPORT", 00:44:26.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:26.002 "adrfam": "ipv4", 00:44:26.002 "trsvcid": "$NVMF_PORT", 00:44:26.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:26.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:26.002 "hdgst": ${hdgst:-false}, 00:44:26.002 "ddgst": ${ddgst:-false} 00:44:26.002 }, 00:44:26.002 "method": "bdev_nvme_attach_controller" 00:44:26.002 } 00:44:26.002 EOF 00:44:26.002 )") 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@578 -- # cat 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:26.002 { 00:44:26.002 "params": { 00:44:26.002 "name": "Nvme$subsystem", 00:44:26.002 "trtype": "$TEST_TRANSPORT", 00:44:26.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:26.002 "adrfam": "ipv4", 00:44:26.002 "trsvcid": "$NVMF_PORT", 00:44:26.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:26.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:26.002 "hdgst": ${hdgst:-false}, 00:44:26.002 "ddgst": ${ddgst:-false} 00:44:26.002 }, 00:44:26.002 "method": "bdev_nvme_attach_controller" 00:44:26.002 } 00:44:26.002 EOF 00:44:26.002 )") 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:44:26.002 22:32:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:26.002 "params": { 00:44:26.002 "name": "Nvme0", 00:44:26.002 "trtype": "tcp", 00:44:26.002 "traddr": "10.0.0.2", 00:44:26.002 "adrfam": "ipv4", 00:44:26.002 "trsvcid": "4420", 00:44:26.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:26.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:26.003 "hdgst": false, 00:44:26.003 "ddgst": false 00:44:26.003 }, 00:44:26.003 "method": "bdev_nvme_attach_controller" 00:44:26.003 },{ 00:44:26.003 "params": { 00:44:26.003 "name": "Nvme1", 00:44:26.003 "trtype": "tcp", 00:44:26.003 "traddr": "10.0.0.2", 00:44:26.003 "adrfam": "ipv4", 00:44:26.003 "trsvcid": "4420", 00:44:26.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:26.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:26.003 "hdgst": false, 00:44:26.003 "ddgst": false 00:44:26.003 }, 00:44:26.003 "method": "bdev_nvme_attach_controller" 00:44:26.003 },{ 00:44:26.003 "params": { 00:44:26.003 "name": "Nvme2", 00:44:26.003 "trtype": "tcp", 00:44:26.003 "traddr": "10.0.0.2", 00:44:26.003 "adrfam": "ipv4", 00:44:26.003 "trsvcid": "4420", 00:44:26.003 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:44:26.003 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:44:26.003 "hdgst": false, 00:44:26.003 "ddgst": false 00:44:26.003 }, 00:44:26.003 "method": "bdev_nvme_attach_controller" 00:44:26.003 }' 00:44:26.003 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:26.003 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:26.003 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:26.003 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:26.003 22:32:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:26.003 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:26.003 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:26.003 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:26.003 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:26.003 22:32:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:26.003 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:26.003 ... 00:44:26.003 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:26.003 ... 00:44:26.003 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:44:26.003 ... 
00:44:26.003 fio-3.35 00:44:26.003 Starting 24 threads 00:44:38.233 00:44:38.233 filename0: (groupid=0, jobs=1): err= 0: pid=3895350: Sat Oct 12 22:32:54 2024 00:44:38.233 read: IOPS=679, BW=2718KiB/s (2783kB/s)(26.5MiB/10003msec) 00:44:38.233 slat (nsec): min=5567, max=81971, avg=7937.11, stdev=5082.87 00:44:38.233 clat (usec): min=1158, max=33261, avg=23482.47, stdev=2768.98 00:44:38.233 lat (usec): min=1168, max=33267, avg=23490.41, stdev=2768.13 00:44:38.233 clat percentiles (usec): 00:44:38.233 | 1.00th=[ 8094], 5.00th=[18220], 10.00th=[22938], 20.00th=[23462], 00:44:38.233 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:44:38.233 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:44:38.233 | 99.00th=[26608], 99.50th=[28967], 99.90th=[29492], 99.95th=[29754], 00:44:38.233 | 99.99th=[33162] 00:44:38.233 bw ( KiB/s): min= 2560, max= 3120, per=4.19%, avg=2712.32, stdev=133.26, samples=19 00:44:38.233 iops : min= 640, max= 780, avg=678.00, stdev=33.31, samples=19 00:44:38.233 lat (msec) : 2=0.09%, 4=0.47%, 10=0.62%, 20=4.97%, 50=93.85% 00:44:38.233 cpu : usr=98.82%, sys=0.91%, ctx=13, majf=0, minf=83 00:44:38.233 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:44:38.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.234 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:38.234 issued rwts: total=6796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:38.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:44:38.234 filename0: (groupid=0, jobs=1): err= 0: pid=3895351: Sat Oct 12 22:32:54 2024 00:44:38.234 read: IOPS=698, BW=2795KiB/s (2862kB/s)(27.3MiB/10018msec) 00:44:38.234 slat (nsec): min=5107, max=69615, avg=10773.00, stdev=8538.16 00:44:38.234 clat (usec): min=10037, max=47291, avg=22824.03, stdev=4365.69 00:44:38.234 lat (usec): min=10043, max=47297, avg=22834.80, stdev=4366.82 00:44:38.234 clat percentiles (usec): 
00:44:38.234 | 1.00th=[13566], 5.00th=[15270], 10.00th=[16712], 20.00th=[19268],
00:44:38.234 | 30.00th=[21627], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:44:38.234 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27132], 95.00th=[30540],
00:44:38.234 | 99.00th=[36439], 99.50th=[38536], 99.90th=[42206], 99.95th=[47449],
00:44:38.234 | 99.99th=[47449]
00:44:38.234 bw ( KiB/s): min= 2576, max= 3072, per=4.32%, avg=2794.53, stdev=139.07, samples=19
00:44:38.234 iops : min= 644, max= 768, avg=698.53, stdev=34.71, samples=19
00:44:38.234 lat (msec) : 20=23.69%, 50=76.31%
00:44:38.234 cpu : usr=98.59%, sys=0.97%, ctx=126, majf=0, minf=100
00:44:38.234 IO depths : 1=1.3%, 2=2.8%, 4=9.0%, 8=74.2%, 16=12.7%, 32=0.0%, >=64=0.0%
00:44:38.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 complete : 0=0.0%, 4=90.0%, 8=6.0%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 issued rwts: total=7000,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.234 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.234 filename0: (groupid=0, jobs=1): err= 0: pid=3895353: Sat Oct 12 22:32:54 2024
00:44:38.234 read: IOPS=658, BW=2635KiB/s (2698kB/s)(25.8MiB/10007msec)
00:44:38.234 slat (usec): min=5, max=116, avg=25.74, stdev=17.88
00:44:38.234 clat (usec): min=9058, max=37811, avg=24054.33, stdev=1761.02
00:44:38.234 lat (usec): min=9074, max=37836, avg=24080.07, stdev=1758.75
00:44:38.234 clat percentiles (usec):
00:44:38.234 | 1.00th=[21890], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200],
00:44:38.234 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:44:38.234 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25822],
00:44:38.234 | 99.00th=[31327], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439],
00:44:38.234 | 99.99th=[38011]
00:44:38.234 bw ( KiB/s): min= 2432, max= 2693, per=4.06%, avg=2626.32, stdev=78.70, samples=19
00:44:38.234 iops : min= 608, max= 673, avg=656.42, stdev=19.71, samples=19
00:44:38.234 lat (msec) : 10=0.03%, 20=0.93%, 50=99.04%
00:44:38.234 cpu : usr=98.91%, sys=0.82%, ctx=14, majf=0, minf=69
00:44:38.234 IO depths : 1=6.0%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0%
00:44:38.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.234 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.234 filename0: (groupid=0, jobs=1): err= 0: pid=3895354: Sat Oct 12 22:32:54 2024
00:44:38.234 read: IOPS=672, BW=2689KiB/s (2754kB/s)(26.3MiB/10015msec)
00:44:38.234 slat (nsec): min=5554, max=92061, avg=17361.29, stdev=13765.39
00:44:38.234 clat (usec): min=8166, max=45324, avg=23666.84, stdev=4552.70
00:44:38.234 lat (usec): min=8191, max=45351, avg=23684.20, stdev=4554.06
00:44:38.234 clat percentiles (usec):
00:44:38.234 | 1.00th=[13566], 5.00th=[16057], 10.00th=[17433], 20.00th=[21627],
00:44:38.234 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987],
00:44:38.234 | 70.00th=[24249], 80.00th=[24773], 90.00th=[28443], 95.00th=[32375],
00:44:38.234 | 99.00th=[39584], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730],
00:44:38.234 | 99.99th=[45351]
00:44:38.234 bw ( KiB/s): min= 2432, max= 2976, per=4.16%, avg=2687.95, stdev=107.75, samples=19
00:44:38.234 iops : min= 608, max= 744, avg=671.84, stdev=26.91, samples=19
00:44:38.234 lat (msec) : 10=0.33%, 20=14.93%, 50=84.75%
00:44:38.234 cpu : usr=98.80%, sys=0.93%, ctx=16, majf=0, minf=49
00:44:38.234 IO depths : 1=3.4%, 2=7.0%, 4=16.8%, 8=63.2%, 16=9.5%, 32=0.0%, >=64=0.0%
00:44:38.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 issued rwts: total=6733,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.234 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.234 filename0: (groupid=0, jobs=1): err= 0: pid=3895355: Sat Oct 12 22:32:54 2024
00:44:38.234 read: IOPS=698, BW=2794KiB/s (2861kB/s)(27.3MiB/10004msec)
00:44:38.234 slat (usec): min=5, max=114, avg=17.98, stdev=16.27
00:44:38.234 clat (usec): min=6468, max=44423, avg=22784.79, stdev=4135.00
00:44:38.234 lat (usec): min=6474, max=44441, avg=22802.77, stdev=4137.26
00:44:38.234 clat percentiles (usec):
00:44:38.234 | 1.00th=[13566], 5.00th=[15533], 10.00th=[16909], 20.00th=[19530],
00:44:38.234 | 30.00th=[22414], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725],
00:44:38.234 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26084], 95.00th=[29230],
00:44:38.234 | 99.00th=[35390], 99.50th=[37487], 99.90th=[44303], 99.95th=[44303],
00:44:38.234 | 99.99th=[44303]
00:44:38.234 bw ( KiB/s): min= 2432, max= 3049, per=4.30%, avg=2779.32, stdev=152.31, samples=19
00:44:38.234 iops : min= 608, max= 762, avg=694.74, stdev=38.07, samples=19
00:44:38.234 lat (msec) : 10=0.37%, 20=21.48%, 50=78.15%
00:44:38.234 cpu : usr=99.00%, sys=0.72%, ctx=31, majf=0, minf=46
00:44:38.234 IO depths : 1=1.2%, 2=2.8%, 4=9.5%, 8=73.4%, 16=13.1%, 32=0.0%, >=64=0.0%
00:44:38.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 complete : 0=0.0%, 4=90.3%, 8=5.9%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 issued rwts: total=6988,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.234 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.234 filename0: (groupid=0, jobs=1): err= 0: pid=3895356: Sat Oct 12 22:32:54 2024
00:44:38.234 read: IOPS=663, BW=2656KiB/s (2719kB/s)(25.9MiB/10001msec)
00:44:38.234 slat (usec): min=5, max=106, avg=22.20, stdev=18.22
00:44:38.234 clat (usec): min=5968, max=47689, avg=23886.22, stdev=2095.83
00:44:38.234 lat (usec): min=5973, max=47706, avg=23908.42, stdev=2094.18
00:44:38.234 clat percentiles (usec):
00:44:38.234 | 1.00th=[16712], 5.00th=[22938], 10.00th=[22938], 20.00th=[23200],
00:44:38.234 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:44:38.234 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297],
00:44:38.234 | 99.00th=[31589], 99.50th=[35390], 99.90th=[43779], 99.95th=[43779],
00:44:38.234 | 99.99th=[47449]
00:44:38.234 bw ( KiB/s): min= 2436, max= 2816, per=4.09%, avg=2645.68, stdev=84.96, samples=19
00:44:38.234 iops : min= 609, max= 704, avg=661.32, stdev=21.24, samples=19
00:44:38.234 lat (msec) : 10=0.24%, 20=1.30%, 50=98.46%
00:44:38.234 cpu : usr=98.67%, sys=1.01%, ctx=74, majf=0, minf=42
00:44:38.234 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0%
00:44:38.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.234 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.234 filename0: (groupid=0, jobs=1): err= 0: pid=3895357: Sat Oct 12 22:32:54 2024
00:44:38.234 read: IOPS=783, BW=3133KiB/s (3208kB/s)(30.7MiB/10019msec)
00:44:38.234 slat (usec): min=5, max=102, avg=10.57, stdev= 9.36
00:44:38.234 clat (usec): min=5625, max=37563, avg=20347.11, stdev=4196.67
00:44:38.234 lat (usec): min=5636, max=37570, avg=20357.68, stdev=4199.72
00:44:38.234 clat percentiles (usec):
00:44:38.234 | 1.00th=[10814], 5.00th=[14353], 10.00th=[15139], 20.00th=[16188],
00:44:38.234 | 30.00th=[16909], 40.00th=[17957], 50.00th=[22938], 60.00th=[23462],
00:44:38.234 | 70.00th=[23725], 80.00th=[23725], 90.00th=[24249], 95.00th=[24773],
00:44:38.234 | 99.00th=[28705], 99.50th=[31065], 99.90th=[35390], 99.95th=[35390],
00:44:38.234 | 99.99th=[37487]
00:44:38.234 bw ( KiB/s): min= 2560, max= 3888, per=4.84%, avg=3131.00, stdev=529.25, samples=20
00:44:38.234 iops : min= 640, max= 972, avg=782.70, stdev=132.26, samples=20
00:44:38.234 lat (msec) : 10=0.36%, 20=44.60%, 50=55.05%
00:44:38.234 cpu : usr=98.55%, sys=0.98%, ctx=72, majf=0, minf=74
00:44:38.234 IO depths : 1=2.9%, 2=6.1%, 4=15.5%, 8=65.7%, 16=9.7%, 32=0.0%, >=64=0.0%
00:44:38.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 complete : 0=0.0%, 4=91.4%, 8=3.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 issued rwts: total=7848,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.234 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.234 filename0: (groupid=0, jobs=1): err= 0: pid=3895358: Sat Oct 12 22:32:54 2024
00:44:38.234 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.1MiB/10011msec)
00:44:38.234 slat (nsec): min=5610, max=78709, avg=15964.06, stdev=11522.81
00:44:38.234 clat (usec): min=12547, max=35569, avg=23877.86, stdev=1335.63
00:44:38.234 lat (usec): min=12557, max=35575, avg=23893.83, stdev=1335.44
00:44:38.234 clat percentiles (usec):
00:44:38.234 | 1.00th=[16712], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462],
00:44:38.234 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:44:38.234 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035],
00:44:38.234 | 99.00th=[26084], 99.50th=[27919], 99.90th=[34341], 99.95th=[34866],
00:44:38.234 | 99.99th=[35390]
00:44:38.234 bw ( KiB/s): min= 2554, max= 2816, per=4.12%, avg=2666.84, stdev=64.57, samples=19
00:44:38.234 iops : min= 638, max= 704, avg=666.63, stdev=16.18, samples=19
00:44:38.234 lat (msec) : 20=1.44%, 50=98.56%
00:44:38.234 cpu : usr=98.83%, sys=0.86%, ctx=91, majf=0, minf=44
00:44:38.234 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0%
00:44:38.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.234 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.234 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.234 filename1: (groupid=0, jobs=1): err= 0: pid=3895359: Sat Oct 12 22:32:54 2024
00:44:38.234 read: IOPS=663, BW=2654KiB/s (2717kB/s)(25.9MiB/10003msec)
00:44:38.234 slat (usec): min=5, max=103, avg=24.71, stdev=16.45
00:44:38.234 clat (usec): min=11619, max=44705, avg=23899.54, stdev=1950.79
00:44:38.234 lat (usec): min=11632, max=44723, avg=23924.25, stdev=1949.40
00:44:38.234 clat percentiles (usec):
00:44:38.234 | 1.00th=[15664], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200],
00:44:38.235 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:44:38.235 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297],
00:44:38.235 | 99.00th=[32375], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060],
00:44:38.235 | 99.99th=[44827]
00:44:38.235 bw ( KiB/s): min= 2427, max= 2816, per=4.09%, avg=2644.37, stdev=93.55, samples=19
00:44:38.235 iops : min= 606, max= 704, avg=660.95, stdev=23.48, samples=19
00:44:38.235 lat (msec) : 20=2.08%, 50=97.92%
00:44:38.235 cpu : usr=98.63%, sys=0.90%, ctx=121, majf=0, minf=52
00:44:38.235 IO depths : 1=5.7%, 2=11.5%, 4=23.7%, 8=52.2%, 16=7.0%, 32=0.0%, >=64=0.0%
00:44:38.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 issued rwts: total=6636,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.235 filename1: (groupid=0, jobs=1): err= 0: pid=3895360: Sat Oct 12 22:32:54 2024
00:44:38.235 read: IOPS=667, BW=2668KiB/s (2732kB/s)(26.1MiB/10003msec)
00:44:38.235 slat (usec): min=5, max=100, avg=21.66, stdev=15.59
00:44:38.235 clat (usec): min=2854, max=47076, avg=23806.84, stdev=2841.37
00:44:38.235 lat (usec): min=2860, max=47095, avg=23828.51, stdev=2841.45
00:44:38.235 clat percentiles (usec):
00:44:38.235 | 1.00th=[14222], 5.00th=[19268], 10.00th=[22938], 20.00th=[23200],
00:44:38.235 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:44:38.235 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[26084],
00:44:38.235 | 99.00th=[34341], 99.50th=[36963], 99.90th=[43254], 99.95th=[46924],
00:44:38.235 | 99.99th=[46924]
00:44:38.235 bw ( KiB/s): min= 2436, max= 2800, per=4.10%, avg=2648.21, stdev=80.96, samples=19
00:44:38.235 iops : min= 609, max= 700, avg=661.95, stdev=20.22, samples=19
00:44:38.235 lat (msec) : 4=0.18%, 10=0.19%, 20=5.23%, 50=94.40%
00:44:38.235 cpu : usr=99.07%, sys=0.65%, ctx=32, majf=0, minf=47
00:44:38.235 IO depths : 1=3.3%, 2=8.5%, 4=22.0%, 8=56.7%, 16=9.5%, 32=0.0%, >=64=0.0%
00:44:38.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 issued rwts: total=6673,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.235 filename1: (groupid=0, jobs=1): err= 0: pid=3895361: Sat Oct 12 22:32:54 2024
00:44:38.235 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10006msec)
00:44:38.235 slat (nsec): min=5608, max=80045, avg=17231.01, stdev=11677.35
00:44:38.235 clat (usec): min=12502, max=34644, avg=23844.29, stdev=1342.67
00:44:38.235 lat (usec): min=12513, max=34665, avg=23861.52, stdev=1342.57
00:44:38.235 clat percentiles (usec):
00:44:38.235 | 1.00th=[16909], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462],
00:44:38.235 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:44:38.235 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035],
00:44:38.235 | 99.00th=[26346], 99.50th=[26346], 99.90th=[34341], 99.95th=[34341],
00:44:38.235 | 99.99th=[34866]
00:44:38.235 bw ( KiB/s): min= 2554, max= 2816, per=4.12%, avg=2666.47, stdev=64.49, samples=19
00:44:38.235 iops : min= 638, max= 704, avg=666.47, stdev=16.14, samples=19
00:44:38.235 lat (msec) : 20=1.56%, 50=98.44%
00:44:38.235 cpu : usr=98.20%, sys=1.28%, ctx=227, majf=0, minf=72
00:44:38.235 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:44:38.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.235 filename1: (groupid=0, jobs=1): err= 0: pid=3895362: Sat Oct 12 22:32:54 2024
00:44:38.235 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10009msec)
00:44:38.235 slat (usec): min=5, max=105, avg=19.55, stdev=15.28
00:44:38.235 clat (usec): min=12272, max=39246, avg=23840.31, stdev=2235.58
00:44:38.235 lat (usec): min=12289, max=39255, avg=23859.87, stdev=2235.31
00:44:38.235 clat percentiles (usec):
00:44:38.235 | 1.00th=[15401], 5.00th=[20841], 10.00th=[22938], 20.00th=[23462],
00:44:38.235 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:44:38.235 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25560],
00:44:38.235 | 99.00th=[31851], 99.50th=[34341], 99.90th=[38536], 99.95th=[39060],
00:44:38.235 | 99.99th=[39060]
00:44:38.235 bw ( KiB/s): min= 2554, max= 2736, per=4.11%, avg=2660.26, stdev=57.47, samples=19
00:44:38.235 iops : min= 638, max= 684, avg=664.89, stdev=14.43, samples=19
00:44:38.235 lat (msec) : 20=4.12%, 50=95.88%
00:44:38.235 cpu : usr=98.83%, sys=0.83%, ctx=100, majf=0, minf=72
00:44:38.235 IO depths : 1=4.0%, 2=9.4%, 4=22.4%, 8=55.6%, 16=8.7%, 32=0.0%, >=64=0.0%
00:44:38.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 issued rwts: total=6674,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.235 filename1: (groupid=0, jobs=1): err= 0: pid=3895363: Sat Oct 12 22:32:54 2024
00:44:38.235 read: IOPS=663, BW=2655KiB/s (2719kB/s)(26.0MiB/10012msec)
00:44:38.235 slat (nsec): min=5560, max=98630, avg=18277.20, stdev=16191.39
00:44:38.235 clat (usec): min=10325, max=36229, avg=23962.44, stdev=1738.40
00:44:38.235 lat (usec): min=10335, max=36237, avg=23980.72, stdev=1736.37
00:44:38.235 clat percentiles (usec):
00:44:38.235 | 1.00th=[16909], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462],
00:44:38.235 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:44:38.235 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297],
00:44:38.235 | 99.00th=[31065], 99.50th=[33424], 99.90th=[35914], 99.95th=[35914],
00:44:38.235 | 99.99th=[36439]
00:44:38.235 bw ( KiB/s): min= 2554, max= 2810, per=4.10%, avg=2652.74, stdev=70.32, samples=19
00:44:38.235 iops : min= 638, max= 702, avg=663.05, stdev=17.57, samples=19
00:44:38.235 lat (msec) : 20=2.12%, 50=97.88%
00:44:38.235 cpu : usr=98.89%, sys=0.81%, ctx=70, majf=0, minf=75
00:44:38.235 IO depths : 1=5.4%, 2=11.2%, 4=23.6%, 8=52.6%, 16=7.1%, 32=0.0%, >=64=0.0%
00:44:38.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 issued rwts: total=6646,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.235 filename1: (groupid=0, jobs=1): err= 0: pid=3895364: Sat Oct 12 22:32:54 2024
00:44:38.235 read: IOPS=663, BW=2653KiB/s (2716kB/s)(25.9MiB/10007msec)
00:44:38.235 slat (usec): min=5, max=118, avg=23.66, stdev=18.93
00:44:38.235 clat (usec): min=13068, max=42103, avg=23916.27, stdev=1794.33
00:44:38.235 lat (usec): min=13075, max=42132, avg=23939.93, stdev=1792.35
00:44:38.235 clat percentiles (usec):
00:44:38.235 | 1.00th=[16450], 5.00th=[22676], 10.00th=[22938], 20.00th=[23200],
00:44:38.235 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:44:38.235 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297],
00:44:38.235 | 99.00th=[32375], 99.50th=[34341], 99.90th=[41681], 99.95th=[42206],
00:44:38.235 | 99.99th=[42206]
00:44:38.235 bw ( KiB/s): min= 2560, max= 2693, per=4.10%, avg=2649.20, stdev=58.17, samples=20
00:44:38.235 iops : min= 640, max= 673, avg=662.20, stdev=14.58, samples=20
00:44:38.235 lat (msec) : 20=1.57%, 50=98.43%
00:44:38.235 cpu : usr=98.78%, sys=0.93%, ctx=24, majf=0, minf=58
00:44:38.235 IO depths : 1=5.9%, 2=11.9%, 4=24.3%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0%
00:44:38.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 issued rwts: total=6636,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.235 filename1: (groupid=0, jobs=1): err= 0: pid=3895366: Sat Oct 12 22:32:54 2024
00:44:38.235 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10005msec)
00:44:38.235 slat (nsec): min=5622, max=86563, avg=17359.64, stdev=12808.74
00:44:38.235 clat (usec): min=12601, max=34721, avg=23887.10, stdev=1210.17
00:44:38.235 lat (usec): min=12613, max=34730, avg=23904.46, stdev=1209.42
00:44:38.235 clat percentiles (usec):
00:44:38.235 | 1.00th=[20317], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462],
00:44:38.235 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:44:38.235 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297],
00:44:38.235 | 99.00th=[26084], 99.50th=[26870], 99.90th=[34866], 99.95th=[34866],
00:44:38.235 | 99.99th=[34866]
00:44:38.235 bw ( KiB/s): min= 2554, max= 2816, per=4.11%, avg=2659.79, stdev=68.66, samples=19
00:44:38.235 iops : min= 638, max= 704, avg=664.84, stdev=17.18, samples=19
00:44:38.235 lat (msec) : 20=0.84%, 50=99.16%
00:44:38.235 cpu : usr=98.08%, sys=1.22%, ctx=251, majf=0, minf=58
00:44:38.235 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:44:38.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.235 filename1: (groupid=0, jobs=1): err= 0: pid=3895367: Sat Oct 12 22:32:54 2024
00:44:38.235 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10006msec)
00:44:38.235 slat (nsec): min=5605, max=80613, avg=12776.21, stdev=9487.35
00:44:38.235 clat (usec): min=2891, max=41566, avg=23717.57, stdev=2248.58
00:44:38.235 lat (usec): min=2913, max=41575, avg=23730.35, stdev=2248.00
00:44:38.235 clat percentiles (usec):
00:44:38.235 | 1.00th=[13173], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462],
00:44:38.235 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:44:38.235 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035],
00:44:38.235 | 99.00th=[26084], 99.50th=[26608], 99.90th=[34341], 99.95th=[41681],
00:44:38.235 | 99.99th=[41681]
00:44:38.235 bw ( KiB/s): min= 2554, max= 2944, per=4.16%, avg=2687.37, stdev=96.46, samples=19
00:44:38.235 iops : min= 638, max= 736, avg=671.79, stdev=24.16, samples=19
00:44:38.235 lat (msec) : 4=0.24%, 10=0.48%, 20=2.32%, 50=96.96%
00:44:38.235 cpu : usr=98.94%, sys=0.78%, ctx=24, majf=0, minf=57
00:44:38.235 IO depths : 1=6.1%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0%
00:44:38.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.235 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.235 filename2: (groupid=0, jobs=1): err= 0: pid=3895368: Sat Oct 12 22:32:54 2024
00:44:38.235 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.0MiB/10021msec)
00:44:38.235 slat (nsec): min=5722, max=77467, avg=15639.14, stdev=10783.72
00:44:38.235 clat (usec): min=12588, max=35274, avg=23956.45, stdev=1325.13
00:44:38.236 lat (usec): min=12603, max=35283, avg=23972.08, stdev=1324.55
00:44:38.236 clat percentiles (usec):
00:44:38.236 | 1.00th=[20579], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462],
00:44:38.236 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:44:38.236 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035],
00:44:38.236 | 99.00th=[27395], 99.50th=[31851], 99.90th=[35390], 99.95th=[35390],
00:44:38.236 | 99.99th=[35390]
00:44:38.236 bw ( KiB/s): min= 2560, max= 2693, per=4.11%, avg=2657.20, stdev=57.06, samples=20
00:44:38.236 iops : min= 640, max= 673, avg=664.20, stdev=14.23, samples=20
00:44:38.236 lat (msec) : 20=0.78%, 50=99.22%
00:44:38.236 cpu : usr=98.91%, sys=0.80%, ctx=23, majf=0, minf=52
00:44:38.236 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:44:38.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.236 filename2: (groupid=0, jobs=1): err= 0: pid=3895369: Sat Oct 12 22:32:54 2024
00:44:38.236 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.0MiB/10003msec)
00:44:38.236 slat (usec): min=5, max=100, avg=22.37, stdev=17.65
00:44:38.236 clat (usec): min=2894, max=55248, avg=23844.10, stdev=2709.13
00:44:38.236 lat (usec): min=2901, max=55274, avg=23866.47, stdev=2709.41
00:44:38.236 clat percentiles (usec):
00:44:38.236 | 1.00th=[13960], 5.00th=[20579], 10.00th=[22938], 20.00th=[23462],
00:44:38.236 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:44:38.236 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[26346],
00:44:38.236 | 99.00th=[33162], 99.50th=[36439], 99.90th=[43254], 99.95th=[43254],
00:44:38.236 | 99.99th=[55313]
00:44:38.236 bw ( KiB/s): min= 2436, max= 2816, per=4.10%, avg=2652.42, stdev=79.93, samples=19
00:44:38.236 iops : min= 609, max= 704, avg=663.00, stdev=20.00, samples=19
00:44:38.236 lat (msec) : 4=0.17%, 10=0.32%, 20=4.07%, 50=95.42%, 100=0.03%
00:44:38.236 cpu : usr=98.65%, sys=0.89%, ctx=85, majf=0, minf=55
00:44:38.236 IO depths : 1=1.9%, 2=5.3%, 4=15.0%, 8=64.9%, 16=12.9%, 32=0.0%, >=64=0.0%
00:44:38.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 complete : 0=0.0%, 4=92.3%, 8=3.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 issued rwts: total=6665,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.236 filename2: (groupid=0, jobs=1): err= 0: pid=3895370: Sat Oct 12 22:32:54 2024
00:44:38.236 read: IOPS=662, BW=2649KiB/s (2712kB/s)(25.9MiB/10004msec)
00:44:38.236 slat (usec): min=5, max=106, avg=23.19, stdev=16.99
00:44:38.236 clat (usec): min=14590, max=47162, avg=23955.97, stdev=1375.65
00:44:38.236 lat (usec): min=14597, max=47184, avg=23979.15, stdev=1373.05
00:44:38.236 clat percentiles (usec):
00:44:38.236 | 1.00th=[22676], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200],
00:44:38.236 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:44:38.236 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035],
00:44:38.236 | 99.00th=[29492], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439],
00:44:38.236 | 99.99th=[46924]
00:44:38.236 bw ( KiB/s): min= 2554, max= 2688, per=4.08%, avg=2639.26, stdev=62.93, samples=19
00:44:38.236 iops : min= 638, max= 672, avg=659.68, stdev=15.70, samples=19
00:44:38.236 lat (msec) : 20=0.45%, 50=99.55%
00:44:38.236 cpu : usr=98.88%, sys=0.77%, ctx=143, majf=0, minf=52
00:44:38.236 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:44:38.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.236 filename2: (groupid=0, jobs=1): err= 0: pid=3895371: Sat Oct 12 22:32:54 2024
00:44:38.236 read: IOPS=653, BW=2612KiB/s (2675kB/s)(25.5MiB/10003msec)
00:44:38.236 slat (usec): min=5, max=117, avg=15.89, stdev=14.02
00:44:38.236 clat (usec): min=3045, max=56409, avg=24424.03, stdev=4161.84
00:44:38.236 lat (usec): min=3051, max=56432, avg=24439.92, stdev=4162.42
00:44:38.236 clat percentiles (usec):
00:44:38.236 | 1.00th=[13960], 5.00th=[17957], 10.00th=[20317], 20.00th=[23200],
00:44:38.236 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23987], 60.00th=[24249],
00:44:38.236 | 70.00th=[24511], 80.00th=[25560], 90.00th=[29230], 95.00th=[31851],
00:44:38.236 | 99.00th=[38536], 99.50th=[41681], 99.90th=[44303], 99.95th=[56361],
00:44:38.236 | 99.99th=[56361]
00:44:38.236 bw ( KiB/s): min= 2324, max= 2768, per=4.02%, avg=2602.79, stdev=107.85, samples=19
00:44:38.236 iops : min= 581, max= 692, avg=650.58, stdev=26.99, samples=19
00:44:38.236 lat (msec) : 4=0.06%, 10=0.43%, 20=8.73%, 50=90.71%, 100=0.08%
00:44:38.236 cpu : usr=98.86%, sys=0.85%, ctx=26, majf=0, minf=95
00:44:38.236 IO depths : 1=0.4%, 2=0.8%, 4=5.4%, 8=78.1%, 16=15.4%, 32=0.0%, >=64=0.0%
00:44:38.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 complete : 0=0.0%, 4=89.6%, 8=7.9%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 issued rwts: total=6532,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.236 filename2: (groupid=0, jobs=1): err= 0: pid=3895372: Sat Oct 12 22:32:54 2024
00:44:38.236 read: IOPS=675, BW=2703KiB/s (2768kB/s)(26.4MiB/10003msec)
00:44:38.236 slat (usec): min=5, max=101, avg=18.60, stdev=16.85
00:44:38.236 clat (usec): min=6202, max=38771, avg=23553.99, stdev=2862.71
00:44:38.236 lat (usec): min=6211, max=38793, avg=23572.60, stdev=2863.57
00:44:38.236 clat percentiles (usec):
00:44:38.236 | 1.00th=[13566], 5.00th=[17695], 10.00th=[21365], 20.00th=[23200],
00:44:38.236 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:44:38.236 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25560],
00:44:38.236 | 99.00th=[33162], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536],
00:44:38.236 | 99.99th=[38536]
00:44:38.236 bw ( KiB/s): min= 2560, max= 2832, per=4.17%, avg=2696.84, stdev=74.50, samples=19
00:44:38.236 iops : min= 640, max= 708, avg=674.11, stdev=18.66, samples=19
00:44:38.236 lat (msec) : 10=0.24%, 20=7.53%, 50=92.23%
00:44:38.236 cpu : usr=98.72%, sys=0.83%, ctx=105, majf=0, minf=54
00:44:38.236 IO depths : 1=0.4%, 2=2.8%, 4=10.7%, 8=71.1%, 16=15.1%, 32=0.0%, >=64=0.0%
00:44:38.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 complete : 0=0.0%, 4=91.3%, 8=5.7%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 issued rwts: total=6760,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.236 filename2: (groupid=0, jobs=1): err= 0: pid=3895373: Sat Oct 12 22:32:54 2024
00:44:38.236 read: IOPS=663, BW=2654KiB/s (2717kB/s)(25.9MiB/10003msec)
00:44:38.236 slat (usec): min=5, max=106, avg=24.98, stdev=17.13
00:44:38.236 clat (usec): min=3996, max=54911, avg=23885.94, stdev=2036.67
00:44:38.236 lat (usec): min=4001, max=54928, avg=23910.92, stdev=2035.98
00:44:38.236 clat percentiles (usec):
00:44:38.236 | 1.00th=[17171], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200],
00:44:38.236 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:44:38.236 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035],
00:44:38.236 | 99.00th=[30016], 99.50th=[33424], 99.90th=[43254], 99.95th=[43254],
00:44:38.236 | 99.99th=[54789]
00:44:38.236 bw ( KiB/s): min= 2404, max= 2816, per=4.08%, avg=2638.11, stdev=87.52, samples=19
00:44:38.236 iops : min= 601, max= 704, avg=659.42, stdev=21.85, samples=19
00:44:38.236 lat (msec) : 4=0.02%, 10=0.33%, 20=1.22%, 50=98.40%, 100=0.03%
00:44:38.236 cpu : usr=98.69%, sys=0.83%, ctx=125, majf=0, minf=43
00:44:38.236 IO depths : 1=5.7%, 2=11.7%, 4=24.1%, 8=51.7%, 16=6.9%, 32=0.0%, >=64=0.0%
00:44:38.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 issued rwts: total=6636,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.236 filename2: (groupid=0, jobs=1): err= 0: pid=3895374: Sat Oct 12 22:32:54 2024
00:44:38.236 read: IOPS=688, BW=2754KiB/s (2820kB/s)(26.9MiB/10012msec)
00:44:38.236 slat (usec): min=4, max=105, avg=19.52, stdev=17.38
00:44:38.236 clat (usec): min=7540, max=43354, avg=23074.07, stdev=4016.06
00:44:38.236 lat (usec): min=7548, max=43376, avg=23093.59, stdev=4018.43
00:44:38.236 clat percentiles (usec):
00:44:38.236 | 1.00th=[13435], 5.00th=[15795], 10.00th=[16909], 20.00th=[20579],
00:44:38.236 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725],
00:44:38.236 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26084], 95.00th=[29754],
00:44:38.236 | 99.00th=[36963], 99.50th=[38536], 99.90th=[42206], 99.95th=[43254],
00:44:38.236 | 99.99th=[43254]
00:44:38.236 bw ( KiB/s): min= 2656, max= 3104, per=4.27%, avg=2759.47, stdev=121.31, samples=19
00:44:38.236 iops : min= 664, max= 776, avg=689.79, stdev=30.33, samples=19
00:44:38.236 lat (msec) : 10=0.06%, 20=17.99%, 50=81.96%
00:44:38.236 cpu : usr=98.22%, sys=1.16%, ctx=175, majf=0, minf=46
00:44:38.236 IO depths : 1=2.8%, 2=5.7%, 4=14.2%, 8=66.7%, 16=10.5%, 32=0.0%, >=64=0.0%
00:44:38.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 complete : 0=0.0%, 4=91.1%, 8=4.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.236 issued rwts: total=6894,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.236 filename2: (groupid=0, jobs=1): err= 0: pid=3895376: Sat Oct 12 22:32:54 2024
00:44:38.236 read: IOPS=663, BW=2654KiB/s (2718kB/s)(25.9MiB/10008msec)
00:44:38.236 slat (usec): min=5, max=142, avg=15.52, stdev=17.50
00:44:38.236 clat (usec): min=8226, max=33490, avg=23981.76, stdev=1110.52
00:44:38.236 lat (usec): min=8235, max=33498, avg=23997.28, stdev=1108.08
00:44:38.236 clat percentiles (usec):
00:44:38.236 | 1.00th=[22676], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462],
00:44:38.236 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:44:38.236 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035],
00:44:38.236 | 99.00th=[26084], 99.50th=[30540], 99.90th=[33424], 99.95th=[33424],
00:44:38.236 | 99.99th=[33424]
00:44:38.236 bw ( KiB/s): min= 2554, max= 2693, per=4.10%, avg=2648.26, stdev=62.38, samples=19
00:44:38.236 iops : min= 638, max= 673, avg=661.95, stdev=15.57, samples=19
00:44:38.236 lat (msec) : 10=0.03%, 20=0.45%, 50=99.52%
00:44:38.236 cpu : usr=98.72%, sys=0.82%, ctx=99, majf=0, minf=38
00:44:38.236 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:44:38.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:38.237 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:38.237 latency : target=0, window=0, percentile=100.00%, depth=16
00:44:38.237 
00:44:38.237 Run status group 0 (all jobs):
00:44:38.237 READ: bw=63.1MiB/s (66.2MB/s), 2612KiB/s-3133KiB/s (2675kB/s-3208kB/s), io=633MiB (664MB), run=10001-10021msec
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:44:38.237 bdev_null0
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:44:38.237 22:32:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:38.237 [2024-10-12 22:32:55.178255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:38.237 bdev_null1 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:38.237 
22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:38.237 { 00:44:38.237 "params": { 00:44:38.237 "name": "Nvme$subsystem", 00:44:38.237 "trtype": "$TEST_TRANSPORT", 00:44:38.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:38.237 "adrfam": "ipv4", 00:44:38.237 "trsvcid": "$NVMF_PORT", 00:44:38.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:38.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:38.237 
"hdgst": ${hdgst:-false}, 00:44:38.237 "ddgst": ${ddgst:-false} 00:44:38.237 }, 00:44:38.237 "method": "bdev_nvme_attach_controller" 00:44:38.237 } 00:44:38.237 EOF 00:44:38.237 )") 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:44:38.237 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:38.238 { 00:44:38.238 "params": { 00:44:38.238 "name": "Nvme$subsystem", 00:44:38.238 "trtype": "$TEST_TRANSPORT", 00:44:38.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:38.238 "adrfam": "ipv4", 00:44:38.238 "trsvcid": "$NVMF_PORT", 00:44:38.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:38.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:38.238 "hdgst": ${hdgst:-false}, 00:44:38.238 "ddgst": ${ddgst:-false} 00:44:38.238 }, 00:44:38.238 "method": "bdev_nvme_attach_controller" 00:44:38.238 } 00:44:38.238 EOF 00:44:38.238 )") 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:38.238 "params": { 00:44:38.238 "name": "Nvme0", 00:44:38.238 "trtype": "tcp", 00:44:38.238 "traddr": "10.0.0.2", 00:44:38.238 "adrfam": "ipv4", 00:44:38.238 "trsvcid": "4420", 00:44:38.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:38.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:38.238 "hdgst": false, 00:44:38.238 "ddgst": false 00:44:38.238 }, 00:44:38.238 "method": "bdev_nvme_attach_controller" 00:44:38.238 },{ 00:44:38.238 "params": { 00:44:38.238 "name": "Nvme1", 00:44:38.238 "trtype": "tcp", 00:44:38.238 "traddr": "10.0.0.2", 00:44:38.238 "adrfam": "ipv4", 00:44:38.238 "trsvcid": "4420", 00:44:38.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:38.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:38.238 "hdgst": false, 00:44:38.238 "ddgst": false 00:44:38.238 }, 00:44:38.238 "method": "bdev_nvme_attach_controller" 00:44:38.238 }' 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:38.238 22:32:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:38.238 22:32:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:38.238 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:38.238 ... 00:44:38.238 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:44:38.238 ... 00:44:38.238 fio-3.35 00:44:38.238 Starting 4 threads 00:44:43.548 00:44:43.548 filename0: (groupid=0, jobs=1): err= 0: pid=3897833: Sat Oct 12 22:33:01 2024 00:44:43.548 read: IOPS=2967, BW=23.2MiB/s (24.3MB/s)(116MiB/5003msec) 00:44:43.548 slat (nsec): min=5401, max=98491, avg=8616.13, stdev=2720.64 00:44:43.548 clat (usec): min=823, max=4033, avg=2672.94, stdev=227.78 00:44:43.548 lat (usec): min=840, max=4041, avg=2681.56, stdev=227.55 00:44:43.548 clat percentiles (usec): 00:44:43.548 | 1.00th=[ 1975], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2540], 00:44:43.548 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:44:43.548 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2933], 00:44:43.548 | 99.00th=[ 3458], 99.50th=[ 3621], 99.90th=[ 3720], 99.95th=[ 3818], 00:44:43.548 | 99.99th=[ 4015] 00:44:43.548 bw ( KiB/s): min=23568, max=24320, per=25.24%, avg=23786.67, stdev=240.27, samples=9 00:44:43.548 iops : min= 2946, max= 3040, avg=2973.33, stdev=30.03, samples=9 00:44:43.548 lat (usec) : 1000=0.09% 00:44:43.548 lat (msec) : 2=1.06%, 4=98.83%, 10=0.02% 00:44:43.548 cpu : usr=96.32%, sys=3.38%, ctx=11, majf=0, minf=96 00:44:43.548 IO depths : 1=0.1%, 2=0.2%, 4=69.9%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:43.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:43.548 complete : 
0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:43.548 issued rwts: total=14848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:43.548 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:43.548 filename0: (groupid=0, jobs=1): err= 0: pid=3897834: Sat Oct 12 22:33:01 2024 00:44:43.548 read: IOPS=2865, BW=22.4MiB/s (23.5MB/s)(112MiB/5001msec) 00:44:43.548 slat (nsec): min=7876, max=67426, avg=8938.68, stdev=2679.79 00:44:43.548 clat (usec): min=1333, max=4617, avg=2767.41, stdev=334.32 00:44:43.548 lat (usec): min=1342, max=4625, avg=2776.34, stdev=334.19 00:44:43.548 clat percentiles (usec): 00:44:43.548 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2606], 00:44:43.548 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:44:43.548 | 70.00th=[ 2737], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3752], 00:44:43.548 | 99.00th=[ 4146], 99.50th=[ 4228], 99.90th=[ 4359], 99.95th=[ 4359], 00:44:43.548 | 99.99th=[ 4621] 00:44:43.548 bw ( KiB/s): min=22256, max=23440, per=24.29%, avg=22895.56, stdev=441.20, samples=9 00:44:43.548 iops : min= 2782, max= 2930, avg=2861.89, stdev=55.21, samples=9 00:44:43.548 lat (msec) : 2=0.13%, 4=97.86%, 10=2.01% 00:44:43.548 cpu : usr=96.34%, sys=3.40%, ctx=7, majf=0, minf=64 00:44:43.548 IO depths : 1=0.1%, 2=0.1%, 4=73.1%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:43.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:43.548 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:43.548 issued rwts: total=14331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:43.548 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:43.548 filename1: (groupid=0, jobs=1): err= 0: pid=3897835: Sat Oct 12 22:33:01 2024 00:44:43.548 read: IOPS=2933, BW=22.9MiB/s (24.0MB/s)(115MiB/5001msec) 00:44:43.548 slat (nsec): min=5408, max=98136, avg=6298.49, stdev=2187.73 00:44:43.548 clat (usec): min=1672, max=5827, avg=2710.58, stdev=196.86 
00:44:43.548 lat (usec): min=1696, max=5853, avg=2716.87, stdev=196.99 00:44:43.548 clat percentiles (usec): 00:44:43.548 | 1.00th=[ 2245], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2573], 00:44:43.548 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:44:43.548 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 2966], 00:44:43.548 | 99.00th=[ 3359], 99.50th=[ 3687], 99.90th=[ 4359], 99.95th=[ 4686], 00:44:43.548 | 99.99th=[ 5800] 00:44:43.548 bw ( KiB/s): min=23152, max=23744, per=24.92%, avg=23482.67, stdev=182.60, samples=9 00:44:43.548 iops : min= 2894, max= 2968, avg=2935.33, stdev=22.83, samples=9 00:44:43.548 lat (msec) : 2=0.18%, 4=99.58%, 10=0.23% 00:44:43.548 cpu : usr=96.74%, sys=2.96%, ctx=38, majf=0, minf=83 00:44:43.548 IO depths : 1=0.1%, 2=0.1%, 4=70.8%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:43.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:43.548 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:43.548 issued rwts: total=14671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:43.548 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:43.548 filename1: (groupid=0, jobs=1): err= 0: pid=3897836: Sat Oct 12 22:33:01 2024 00:44:43.548 read: IOPS=3016, BW=23.6MiB/s (24.7MB/s)(118MiB/5001msec) 00:44:43.548 slat (nsec): min=5396, max=65399, avg=6056.26, stdev=1704.98 00:44:43.548 clat (usec): min=680, max=5220, avg=2636.62, stdev=381.20 00:44:43.548 lat (usec): min=685, max=5240, avg=2642.68, stdev=381.22 00:44:43.548 clat percentiles (usec): 00:44:43.548 | 1.00th=[ 1975], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2343], 00:44:43.548 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2704], 00:44:43.548 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 3130], 95.00th=[ 3490], 00:44:43.548 | 99.00th=[ 3884], 99.50th=[ 4047], 99.90th=[ 4080], 99.95th=[ 4113], 00:44:43.548 | 99.99th=[ 4228] 00:44:43.548 bw ( KiB/s): min=23520, max=24592, 
per=25.58%, avg=24103.11, stdev=386.67, samples=9 00:44:43.548 iops : min= 2940, max= 3074, avg=3012.89, stdev=48.33, samples=9 00:44:43.548 lat (usec) : 750=0.03%, 1000=0.01% 00:44:43.548 lat (msec) : 2=1.84%, 4=97.49%, 10=0.63% 00:44:43.548 cpu : usr=96.66%, sys=2.98%, ctx=112, majf=0, minf=111 00:44:43.548 IO depths : 1=0.1%, 2=0.4%, 4=69.0%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:43.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:43.548 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:43.548 issued rwts: total=15086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:43.548 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:43.548 00:44:43.548 Run status group 0 (all jobs): 00:44:43.548 READ: bw=92.0MiB/s (96.5MB/s), 22.4MiB/s-23.6MiB/s (23.5MB/s-24.7MB/s), io=460MiB (483MB), run=5001-5003msec 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.548 00:44:43.548 real 0m24.258s 00:44:43.548 user 5m18.834s 00:44:43.548 sys 0m4.679s 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:43.548 22:33:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:43.548 ************************************ 00:44:43.548 END TEST fio_dif_rand_params 00:44:43.548 ************************************ 00:44:43.548 22:33:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:44:43.548 22:33:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:43.548 22:33:01 nvmf_dif -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:44:43.548 22:33:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:43.548 ************************************ 00:44:43.548 START TEST fio_dif_digest 00:44:43.548 ************************************ 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:43.548 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:44:43.549 bdev_null0 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:43.549 [2024-10-12 22:33:01.652611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:44:43.549 { 00:44:43.549 "params": { 00:44:43.549 "name": "Nvme$subsystem", 00:44:43.549 "trtype": "$TEST_TRANSPORT", 00:44:43.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:43.549 "adrfam": "ipv4", 00:44:43.549 "trsvcid": "$NVMF_PORT", 00:44:43.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:43.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:43.549 "hdgst": ${hdgst:-false}, 00:44:43.549 "ddgst": ${ddgst:-false} 00:44:43.549 }, 00:44:43.549 "method": "bdev_nvme_attach_controller" 00:44:43.549 } 00:44:43.549 EOF 00:44:43.549 )") 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 
-- # shift 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:44:43.549 "params": { 00:44:43.549 "name": "Nvme0", 00:44:43.549 "trtype": "tcp", 00:44:43.549 "traddr": "10.0.0.2", 00:44:43.549 "adrfam": "ipv4", 00:44:43.549 "trsvcid": "4420", 00:44:43.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:43.549 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:43.549 "hdgst": true, 00:44:43.549 "ddgst": true 00:44:43.549 }, 00:44:43.549 "method": "bdev_nvme_attach_controller" 00:44:43.549 }' 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:43.549 22:33:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:43.811 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:43.811 ... 00:44:43.811 fio-3.35 00:44:43.811 Starting 3 threads 00:44:56.059 00:44:56.059 filename0: (groupid=0, jobs=1): err= 0: pid=3899140: Sat Oct 12 22:33:12 2024 00:44:56.059 read: IOPS=301, BW=37.7MiB/s (39.5MB/s)(379MiB/10045msec) 00:44:56.059 slat (nsec): min=5726, max=35800, avg=6839.88, stdev=1361.50 00:44:56.059 clat (usec): min=6711, max=49985, avg=9919.71, stdev=1308.65 00:44:56.059 lat (usec): min=6718, max=49992, avg=9926.55, stdev=1308.60 00:44:56.059 clat percentiles (usec): 00:44:56.059 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241], 00:44:56.059 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:44:56.059 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11076], 95.00th=[11338], 00:44:56.059 | 99.00th=[12125], 99.50th=[12387], 99.90th=[13042], 99.95th=[45351], 00:44:56.059 | 99.99th=[50070] 00:44:56.059 bw ( KiB/s): min=37376, max=39936, per=32.93%, avg=38771.20, stdev=785.65, samples=20 00:44:56.059 iops : min= 292, max= 312, avg=302.90, stdev= 6.14, samples=20 00:44:56.059 lat (msec) : 10=55.33%, 20=44.61%, 50=0.07% 00:44:56.059 cpu : usr=93.33%, 
sys=6.41%, ctx=33, majf=0, minf=167 00:44:56.059 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:56.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.059 issued rwts: total=3031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:56.059 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:56.059 filename0: (groupid=0, jobs=1): err= 0: pid=3899141: Sat Oct 12 22:33:12 2024 00:44:56.059 read: IOPS=307, BW=38.5MiB/s (40.4MB/s)(387MiB/10046msec) 00:44:56.059 slat (nsec): min=5718, max=30911, avg=6781.21, stdev=1222.34 00:44:56.059 clat (usec): min=7252, max=48831, avg=9718.63, stdev=1305.74 00:44:56.059 lat (usec): min=7258, max=48837, avg=9725.41, stdev=1305.77 00:44:56.059 clat percentiles (usec): 00:44:56.059 | 1.00th=[ 7832], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:44:56.059 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:44:56.059 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:44:56.059 | 99.00th=[12256], 99.50th=[12387], 99.90th=[16188], 99.95th=[45876], 00:44:56.059 | 99.99th=[49021] 00:44:56.059 bw ( KiB/s): min=38144, max=41472, per=33.62%, avg=39577.60, stdev=1021.98, samples=20 00:44:56.059 iops : min= 298, max= 324, avg=309.20, stdev= 7.98, samples=20 00:44:56.059 lat (msec) : 10=67.84%, 20=32.09%, 50=0.06% 00:44:56.059 cpu : usr=93.48%, sys=6.25%, ctx=32, majf=0, minf=147 00:44:56.059 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:56.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.059 issued rwts: total=3094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:56.059 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:56.059 filename0: (groupid=0, jobs=1): err= 0: pid=3899142: Sat Oct 12 
22:33:12 2024 00:44:56.059 read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(390MiB/10048msec) 00:44:56.059 slat (nsec): min=5763, max=72376, avg=7392.12, stdev=1933.64 00:44:56.059 clat (usec): min=6184, max=48933, avg=9648.54, stdev=1313.35 00:44:56.059 lat (usec): min=6193, max=48940, avg=9655.93, stdev=1313.32 00:44:56.059 clat percentiles (usec): 00:44:56.059 | 1.00th=[ 7701], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8848], 00:44:56.059 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:44:56.059 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:44:56.059 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12387], 99.95th=[47973], 00:44:56.059 | 99.99th=[49021] 00:44:56.059 bw ( KiB/s): min=38400, max=42240, per=33.87%, avg=39872.00, stdev=916.93, samples=20 00:44:56.059 iops : min= 300, max= 330, avg=311.50, stdev= 7.16, samples=20 00:44:56.059 lat (msec) : 10=66.12%, 20=33.81%, 50=0.06% 00:44:56.059 cpu : usr=93.63%, sys=6.11%, ctx=23, majf=0, minf=167 00:44:56.059 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:56.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.059 issued rwts: total=3117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:56.059 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:56.059 00:44:56.059 Run status group 0 (all jobs): 00:44:56.059 READ: bw=115MiB/s (121MB/s), 37.7MiB/s-38.8MiB/s (39.5MB/s-40.7MB/s), io=1155MiB (1211MB), run=10045-10048msec 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- 
target/dif.sh@36 -- # local sub_id=0 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:56.059 00:44:56.059 real 0m11.174s 00:44:56.059 user 0m41.946s 00:44:56.059 sys 0m2.169s 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:56.059 22:33:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:56.059 ************************************ 00:44:56.059 END TEST fio_dif_digest 00:44:56.059 ************************************ 00:44:56.059 22:33:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:56.059 22:33:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:56.059 rmmod nvme_tcp 00:44:56.059 rmmod nvme_fabrics 00:44:56.059 rmmod nvme_keyring 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 3888884 ']' 00:44:56.059 22:33:12 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 3888884 00:44:56.059 22:33:12 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3888884 ']' 00:44:56.059 22:33:12 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3888884 00:44:56.059 22:33:12 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:44:56.059 22:33:12 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:56.059 22:33:12 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3888884 00:44:56.059 22:33:12 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:56.059 22:33:12 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:56.059 22:33:12 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3888884' 00:44:56.059 killing process with pid 3888884 00:44:56.059 22:33:12 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3888884 00:44:56.059 22:33:12 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3888884 00:44:56.059 22:33:13 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:44:56.059 22:33:13 nvmf_dif -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:57.974 Waiting for block devices as requested 00:44:58.235 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:44:58.235 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:44:58.235 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:44:58.495 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:44:58.495 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:44:58.495 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:44:58.495 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:44:58.756 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 
00:44:58.756 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:44:59.017 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:44:59.017 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:44:59.017 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:44:59.277 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:44:59.277 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:44:59.277 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:44:59.538 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:44:59.538 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:44:59.798 22:33:18 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:44:59.798 22:33:18 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:44:59.798 22:33:18 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:44:59.798 22:33:18 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:44:59.798 22:33:18 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:44:59.798 22:33:18 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:44:59.798 22:33:18 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:59.798 22:33:18 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:59.798 22:33:18 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:59.798 22:33:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:59.798 22:33:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:02.345 22:33:20 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:02.345 00:45:02.345 real 1m17.946s 00:45:02.345 user 7m57.196s 00:45:02.345 sys 0m22.565s 00:45:02.345 22:33:20 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:02.345 22:33:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:02.345 ************************************ 00:45:02.345 END TEST nvmf_dif 00:45:02.345 ************************************ 00:45:02.345 22:33:20 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:02.345 22:33:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:02.345 22:33:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:02.345 22:33:20 -- common/autotest_common.sh@10 -- # set +x 00:45:02.345 ************************************ 00:45:02.345 START TEST nvmf_abort_qd_sizes 00:45:02.345 ************************************ 00:45:02.345 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:02.345 * Looking for test storage... 00:45:02.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:02.345 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:02.345 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:45:02.345 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:02.345 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:02.345 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:02.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.346 --rc genhtml_branch_coverage=1 00:45:02.346 --rc genhtml_function_coverage=1 00:45:02.346 --rc 
genhtml_legend=1 00:45:02.346 --rc geninfo_all_blocks=1 00:45:02.346 --rc geninfo_unexecuted_blocks=1 00:45:02.346 00:45:02.346 ' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:02.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.346 --rc genhtml_branch_coverage=1 00:45:02.346 --rc genhtml_function_coverage=1 00:45:02.346 --rc genhtml_legend=1 00:45:02.346 --rc geninfo_all_blocks=1 00:45:02.346 --rc geninfo_unexecuted_blocks=1 00:45:02.346 00:45:02.346 ' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:02.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.346 --rc genhtml_branch_coverage=1 00:45:02.346 --rc genhtml_function_coverage=1 00:45:02.346 --rc genhtml_legend=1 00:45:02.346 --rc geninfo_all_blocks=1 00:45:02.346 --rc geninfo_unexecuted_blocks=1 00:45:02.346 00:45:02.346 ' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:02.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:02.346 --rc genhtml_branch_coverage=1 00:45:02.346 --rc genhtml_function_coverage=1 00:45:02.346 --rc genhtml_legend=1 00:45:02.346 --rc geninfo_all_blocks=1 00:45:02.346 --rc geninfo_unexecuted_blocks=1 00:45:02.346 00:45:02.346 ' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:02.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:45:02.346 22:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:45:10.587 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:45:10.587 
22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:45:10.587 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:45:10.587 Found net devices under 0000:4b:00.0: cvl_0_0 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ up == up ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:45:10.587 Found net devices under 0000:4b:00.1: cvl_0_1 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # is_hw=yes 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 
00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:10.587 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:10.588 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:10.588 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:10.588 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:10.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:10.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:45:10.588 00:45:10.588 --- 10.0.0.2 ping statistics --- 00:45:10.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:10.588 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:45:10.588 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:10.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:10.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:45:10.588 00:45:10.588 --- 10.0.0.1 ping statistics --- 00:45:10.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:10.588 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:45:10.588 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:10.588 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # return 0 00:45:10.588 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:45:10.588 22:33:27 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:13.132 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:45:13.132 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:45:13.132 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:45:13.132 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:45:13.132 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:45:13.132 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:45:13.132 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:45:13.132 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:45:13.132 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:45:13.132 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:45:13.133 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:45:13.133 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:45:13.133 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:45:13.133 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:45:13.133 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:45:13.133 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:45:13.133 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=3909030 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 3909030 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3909030 ']' 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:45:13.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:13.393 22:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:13.393 [2024-10-12 22:33:31.873652] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:45:13.393 [2024-10-12 22:33:31.873742] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:13.653 [2024-10-12 22:33:31.968970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:13.654 [2024-10-12 22:33:32.009905] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:13.654 [2024-10-12 22:33:32.009944] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:13.654 [2024-10-12 22:33:32.009953] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:13.654 [2024-10-12 22:33:32.009959] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:13.654 [2024-10-12 22:33:32.009965] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:45:13.654 [2024-10-12 22:33:32.010132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:13.654 [2024-10-12 22:33:32.010286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:45:13.654 [2024-10-12 22:33:32.010404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:13.654 [2024-10-12 22:33:32.010406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:45:14.225 22:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:14.225 22:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:45:14.225 22:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:45:14.225 22:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:14.225 22:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:14.486 22:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:14.486 ************************************ 00:45:14.486 START TEST spdk_target_abort 00:45:14.486 ************************************ 00:45:14.486 22:33:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:45:14.486 22:33:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:45:14.486 22:33:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:45:14.486 22:33:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:14.486 22:33:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:14.746 spdk_targetn1 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:14.746 [2024-10-12 22:33:33.079935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:14.746 [2024-10-12 22:33:33.120240] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:14.746 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:14.747 22:33:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:15.007 [2024-10-12 22:33:33.327682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:480 len:8 PRP1 0x2000078be000 PRP2 0x0 00:45:15.007 [2024-10-12 22:33:33.327715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0040 p:1 m:0 dnr:0 00:45:15.007 [2024-10-12 22:33:33.446663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2896 len:8 PRP1 0x2000078be000 PRP2 0x0 00:45:15.007 [2024-10-12 22:33:33.446688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:45:15.007 [2024-10-12 22:33:33.454660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3160 len:8 PRP1 0x2000078be000 PRP2 0x0 00:45:15.007 [2024-10-12 
22:33:33.454680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:008c p:0 m:0 dnr:0 00:45:15.007 [2024-10-12 22:33:33.478979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3952 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:45:15.007 [2024-10-12 22:33:33.479001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f0 p:0 m:0 dnr:0 00:45:18.309 Initializing NVMe Controllers 00:45:18.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:45:18.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:18.309 Initialization complete. Launching workers. 00:45:18.309 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11259, failed: 4 00:45:18.309 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2247, failed to submit 9016 00:45:18.309 success 736, unsuccessful 1511, failed 0 00:45:18.309 22:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:18.309 22:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:18.309 [2024-10-12 22:33:36.542271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:2264 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:45:18.309 [2024-10-12 22:33:36.542312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:45:18.309 [2024-10-12 22:33:36.550271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:2448 len:8 PRP1 0x200007c5c000 PRP2 0x0 
00:45:18.309 [2024-10-12 22:33:36.550295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:45:18.309 [2024-10-12 22:33:36.573209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2992 len:8 PRP1 0x200007c46000 PRP2 0x0 00:45:18.309 [2024-10-12 22:33:36.573235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:45:18.309 [2024-10-12 22:33:36.589318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:3384 len:8 PRP1 0x200007c3a000 PRP2 0x0 00:45:18.309 [2024-10-12 22:33:36.589340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:45:18.309 [2024-10-12 22:33:36.613410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:4008 len:8 PRP1 0x200007c40000 PRP2 0x0 00:45:18.309 [2024-10-12 22:33:36.613433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00fb p:0 m:0 dnr:0 00:45:19.694 [2024-10-12 22:33:37.820264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:32120 len:8 PRP1 0x200007c60000 PRP2 0x0 00:45:19.694 [2024-10-12 22:33:37.820301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00bd p:0 m:0 dnr:0 00:45:21.607 Initializing NVMe Controllers 00:45:21.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:45:21.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:21.607 Initialization complete. Launching workers. 
00:45:21.607 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8793, failed: 6 00:45:21.607 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1214, failed to submit 7585 00:45:21.607 success 379, unsuccessful 835, failed 0 00:45:21.607 22:33:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:21.607 22:33:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:24.907 Initializing NVMe Controllers 00:45:24.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:45:24.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:24.907 Initialization complete. Launching workers. 00:45:24.907 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45029, failed: 0 00:45:24.907 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2487, failed to submit 42542 00:45:24.907 success 600, unsuccessful 1887, failed 0 00:45:24.907 22:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:45:24.907 22:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.907 22:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:24.907 22:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.907 22:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:45:24.907 22:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:45:24.907 22:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:26.292 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:26.292 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3909030 00:45:26.292 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3909030 ']' 00:45:26.292 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3909030 00:45:26.292 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:45:26.293 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:26.293 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3909030 00:45:26.293 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:26.293 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:26.293 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3909030' 00:45:26.293 killing process with pid 3909030 00:45:26.293 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3909030 00:45:26.293 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3909030 00:45:26.554 00:45:26.554 real 0m12.040s 00:45:26.554 user 0m49.159s 00:45:26.554 sys 0m1.910s 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:26.554 ************************************ 00:45:26.554 END TEST spdk_target_abort 00:45:26.554 
************************************ 00:45:26.554 22:33:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:45:26.554 22:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:26.554 22:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:26.554 22:33:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:26.554 ************************************ 00:45:26.554 START TEST kernel_target_abort 00:45:26.554 ************************************ 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:45:26.554 22:33:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:29.855 Waiting for block devices as requested 00:45:29.855 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:30.114 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:30.114 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:30.114 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:30.114 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:30.374 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:30.374 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:30.374 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:30.634 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:45:30.634 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:30.894 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:30.894 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:30.894 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:31.154 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:31.154 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:31.154 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:31.414 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:31.675 22:33:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:45:31.675 22:33:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:45:31.675 22:33:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:45:31.675 22:33:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:45:31.675 22:33:49 nvmf_abort_qd_sizes.kernel_target_abort -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:31.675 22:33:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:45:31.675 22:33:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:45:31.675 22:33:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:45:31.675 22:33:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:45:31.675 No valid GPT data, bailing 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ -b /dev/nvme0n1 ]] 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme0n1 00:45:31.675 22:33:50 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:45:31.675 00:45:31.675 Discovery Log Number of Records 2, Generation counter 2 00:45:31.675 =====Discovery Log Entry 0====== 00:45:31.675 trtype: tcp 00:45:31.675 adrfam: ipv4 00:45:31.675 subtype: current discovery subsystem 00:45:31.675 treq: not specified, sq flow control disable supported 00:45:31.675 portid: 1 00:45:31.675 trsvcid: 4420 00:45:31.675 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:45:31.675 traddr: 10.0.0.1 00:45:31.675 eflags: none 00:45:31.675 sectype: none 00:45:31.675 =====Discovery Log Entry 1====== 00:45:31.675 trtype: tcp 00:45:31.675 adrfam: ipv4 00:45:31.675 subtype: nvme subsystem 00:45:31.675 treq: not specified, sq flow control disable supported 00:45:31.675 portid: 1 00:45:31.675 trsvcid: 4420 00:45:31.675 subnqn: nqn.2016-06.io.spdk:testnqn 00:45:31.675 traddr: 10.0.0.1 00:45:31.675 eflags: none 00:45:31.675 sectype: none 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:45:31.675 22:33:50 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # 
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:31.675 22:33:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:34.975 Initializing NVMe Controllers 00:45:34.975 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:34.975 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:34.975 Initialization complete. Launching workers. 
00:45:34.975 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67803, failed: 0 00:45:34.975 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67803, failed to submit 0 00:45:34.975 success 0, unsuccessful 67803, failed 0 00:45:34.975 22:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:34.975 22:33:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:38.272 Initializing NVMe Controllers 00:45:38.272 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:38.272 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:38.272 Initialization complete. Launching workers. 00:45:38.272 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 121682, failed: 0 00:45:38.272 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30650, failed to submit 91032 00:45:38.272 success 0, unsuccessful 30650, failed 0 00:45:38.272 22:33:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:38.272 22:33:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:41.571 Initializing NVMe Controllers 00:45:41.571 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:45:41.571 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:45:41.571 Initialization complete. Launching workers. 
00:45:41.571 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 147208, failed: 0 00:45:41.571 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36834, failed to submit 110374 00:45:41.571 success 0, unsuccessful 36834, failed 0 00:45:41.571 22:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:45:41.571 22:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:45:41.571 22:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:45:41.571 22:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:41.571 22:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:45:41.571 22:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:45:41.571 22:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:45:41.571 22:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:45:41.571 22:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:45:41.571 22:33:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:44.871 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 
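The clean_kernel_target calls above tear the configfs tree down in strict reverse order: the port-to-subsystem link goes first, then children before parents, and the module unload last. A dry-runnable mirror of that order against a scratch prefix; the extra rmdirs for the namespaces/ and subsystems/ intermediate directories are only needed outside real configfs (where they are implicit), and the final modprobe is left as a comment:

```shell
#!/usr/bin/env bash
set -e
NVMET="${NVMET:-$(mktemp -d)}"
subnqn=nqn.2016-06.io.spdk:testnqn
# Recreate the layout so the teardown below has something to remove.
mkdir -p "$NVMET/subsystems/$subnqn/namespaces/1" "$NVMET/ports/1/subsystems"
ln -s "$NVMET/subsystems/$subnqn" "$NVMET/ports/1/subsystems/"

# Teardown, mirroring nvmf/common.sh@712-719: link, namespace, port, subsystem.
rm -f  "$NVMET/ports/1/subsystems/$subnqn"
rmdir  "$NVMET/subsystems/$subnqn/namespaces/1" \
       "$NVMET/subsystems/$subnqn/namespaces"
rmdir  "$NVMET/ports/1/subsystems" "$NVMET/ports/1"
rmdir  "$NVMET/subsystems/$subnqn" "$NVMET/subsystems"
# On a real host, finish with: modprobe -r nvmet_tcp nvmet
```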
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:45:44.871 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:45:46.257 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:45:46.828 00:45:46.828 real 0m20.187s 00:45:46.828 user 0m9.950s 00:45:46.828 sys 0m5.918s 00:45:46.828 22:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:46.828 22:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:46.828 ************************************ 00:45:46.828 END TEST kernel_target_abort 00:45:46.828 ************************************ 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:46.828 rmmod nvme_tcp 00:45:46.828 rmmod nvme_fabrics 00:45:46.828 rmmod nvme_keyring 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 3909030 ']' 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 3909030 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3909030 ']' 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3909030 00:45:46.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3909030) - No such process 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3909030 is not found' 00:45:46.828 Process with pid 3909030 is not found 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:45:46.828 22:34:05 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:50.129 Waiting for block devices as requested 00:45:50.129 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:50.129 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:50.390 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:50.390 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:50.390 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:50.651 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:50.651 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:50.651 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:50.912 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:45:50.912 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:51.173 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:51.173 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:51.173 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:51.434 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:51.434 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:51.434 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:51.695 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:51.956 22:34:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:53.869 22:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:53.869 00:45:53.869 real 0m51.947s 00:45:53.869 user 1m4.551s 00:45:53.869 sys 0m18.711s 00:45:53.869 22:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:53.869 22:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:53.869 ************************************ 00:45:53.869 END TEST nvmf_abort_qd_sizes 00:45:53.869 ************************************ 00:45:54.131 22:34:12 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:54.131 22:34:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:54.131 22:34:12 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:45:54.131 22:34:12 -- common/autotest_common.sh@10 -- # set +x 00:45:54.131 ************************************ 00:45:54.131 START TEST keyring_file 00:45:54.131 ************************************ 00:45:54.131 22:34:12 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:45:54.131 * Looking for test storage... 00:45:54.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:45:54.131 22:34:12 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:54.131 22:34:12 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:54.131 22:34:12 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:45:54.131 22:34:12 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@345 -- # : 1 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:54.131 22:34:12 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@353 -- # local d=1 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@355 -- # echo 1 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@353 -- # local d=2 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@355 -- # echo 2 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:54.131 22:34:12 keyring_file -- scripts/common.sh@368 -- # return 0 00:45:54.131 22:34:12 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:54.131 22:34:12 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:54.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.131 --rc genhtml_branch_coverage=1 00:45:54.131 --rc genhtml_function_coverage=1 00:45:54.131 --rc genhtml_legend=1 00:45:54.131 --rc geninfo_all_blocks=1 00:45:54.131 --rc geninfo_unexecuted_blocks=1 00:45:54.131 00:45:54.131 ' 00:45:54.131 22:34:12 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:54.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.131 --rc genhtml_branch_coverage=1 00:45:54.131 --rc genhtml_function_coverage=1 00:45:54.131 --rc genhtml_legend=1 00:45:54.131 --rc geninfo_all_blocks=1 00:45:54.131 --rc 
geninfo_unexecuted_blocks=1 00:45:54.131 00:45:54.131 ' 00:45:54.131 22:34:12 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:54.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.131 --rc genhtml_branch_coverage=1 00:45:54.131 --rc genhtml_function_coverage=1 00:45:54.131 --rc genhtml_legend=1 00:45:54.131 --rc geninfo_all_blocks=1 00:45:54.131 --rc geninfo_unexecuted_blocks=1 00:45:54.131 00:45:54.131 ' 00:45:54.131 22:34:12 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:54.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:54.131 --rc genhtml_branch_coverage=1 00:45:54.131 --rc genhtml_function_coverage=1 00:45:54.131 --rc genhtml_legend=1 00:45:54.131 --rc geninfo_all_blocks=1 00:45:54.131 --rc geninfo_unexecuted_blocks=1 00:45:54.131 00:45:54.131 ' 00:45:54.131 22:34:12 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:45:54.131 22:34:12 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:54.131 22:34:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:45:54.131 22:34:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:54.131 22:34:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:54.131 22:34:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:54.131 22:34:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:54.131 22:34:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:54.131 22:34:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:54.131 22:34:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:54.131 22:34:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:54.131 22:34:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:54.393 22:34:12 keyring_file -- 
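The `lt 1.15 2` trace above comes from scripts/common.sh's cmp_versions: both version strings are split on `.`, `-` and `:`, then compared component-by-component as integers, with missing components treated as 0. A condensed, self-contained sketch of that comparison (the helper name is kept, the internals are simplified relative to the script):

```shell
#!/usr/bin/env bash
# Succeed iff ver1 < ver2, in the spirit of scripts/common.sh's `lt`.
lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components default to 0, so 1.15 < 1.15.1 holds.
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This is why the run above picks the pre-2.0 lcov option set: 1 < 2 decides the comparison on the first component.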
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:54.393 22:34:12 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:45:54.393 22:34:12 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:54.393 22:34:12 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:54.393 22:34:12 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:54.393 22:34:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.393 22:34:12 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.393 22:34:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.393 22:34:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:45:54.393 22:34:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@51 -- # : 0 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:54.393 22:34:12 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:45:54.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5f07zuRF5O 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@728 
-- # key=00112233445566778899aabbccddeeff 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@729 -- # python - 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5f07zuRF5O 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5f07zuRF5O 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5f07zuRF5O 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.O3jzKkKf6S 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:45:54.394 22:34:12 keyring_file -- nvmf/common.sh@729 -- # python - 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.O3jzKkKf6S 00:45:54.394 22:34:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.O3jzKkKf6S 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.O3jzKkKf6S 
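prep_key above writes each key to a chmod-0600 temp file after running it through format_interchange_psk, which wraps the configured secret in the NVMe TLS interchange form `NVMeTLSkey-1:<hash>:<base64 of secret plus its CRC-32>:`. A sketch of that transform using the same inline-python trick the log shows; treat the exact hash-field digits and the little-endian CRC byte order here as assumptions based on the interchange format, not a verified re-implementation of nvmf/common.sh:

```shell
#!/usr/bin/env bash
key=00112233445566778899aabbccddeeff   # key0 from keyring/file.sh@15
psk=$(python3 - "$key" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
# CRC-32 of the secret, appended little-endian before base64-encoding.
crc = zlib.crc32(secret).to_bytes(4, "little")
# "00" = no hash, matching digest 0 in the log above (assumed field value).
print(f"NVMeTLSkey-1:00:{base64.b64encode(secret + crc).decode()}:")
PYEOF
)
echo "$psk"
```

The resulting one-line PSK file is what keyring_file_add_key loads over the bperf RPC socket a few steps later.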
00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=3919127 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3919127 00:45:54.394 22:34:12 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:45:54.394 22:34:12 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3919127 ']' 00:45:54.394 22:34:12 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:54.394 22:34:12 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:54.394 22:34:12 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:54.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:54.394 22:34:12 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:54.394 22:34:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:54.394 [2024-10-12 22:34:12.834767] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:45:54.394 [2024-10-12 22:34:12.834846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919127 ] 00:45:54.655 [2024-10-12 22:34:12.916214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:54.655 [2024-10-12 22:34:12.963190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:55.226 22:34:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:55.226 [2024-10-12 22:34:13.630121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:55.226 null0 00:45:55.226 [2024-10-12 22:34:13.662159] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:55.226 [2024-10-12 22:34:13.662585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:55.226 22:34:13 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:55.226 [2024-10-12 22:34:13.694225] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:45:55.226 request: 00:45:55.226 { 00:45:55.226 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:45:55.226 "secure_channel": false, 00:45:55.226 "listen_address": { 00:45:55.226 "trtype": "tcp", 00:45:55.226 "traddr": "127.0.0.1", 00:45:55.226 "trsvcid": "4420" 00:45:55.226 }, 00:45:55.226 "method": "nvmf_subsystem_add_listener", 00:45:55.226 "req_id": 1 00:45:55.226 } 00:45:55.226 Got JSON-RPC error response 00:45:55.226 response: 00:45:55.226 { 00:45:55.226 "code": -32602, 00:45:55.226 "message": "Invalid parameters" 00:45:55.226 } 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:55.226 22:34:13 keyring_file -- keyring/file.sh@47 -- # bperfpid=3919250 00:45:55.226 22:34:13 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3919250 /var/tmp/bperf.sock 00:45:55.226 22:34:13 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:45:55.226 22:34:13 
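The `NOT rpc_cmd nvmf_subsystem_add_listener ...` call above is autotest_common.sh's inverted-expectation helper: adding the listener is supposed to fail (it already exists), so the test passes only when the wrapped command exits non-zero, and `es=1` with the `(( es > 128 ))` check records that it was an ordinary error rather than death by signal. A stripped-down sketch of the idiom, not a copy of the real helper:

```shell
#!/usr/bin/env bash
# Minimal NOT: succeed iff the wrapped command fails in an ordinary way.
NOT() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then
        return "$es"   # killed by a signal: propagate the real failure
    fi
    if (( es == 0 )); then
        return 1       # command unexpectedly succeeded
    fi
    return 0           # expected failure observed
}
NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success rejected"
```

Without the helper, an expected-error step like the duplicate-listener RPC above would abort the whole script under `set -e`.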
keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3919250 ']' 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:55.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:55.226 22:34:13 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:55.227 22:34:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:45:55.491 [2024-10-12 22:34:13.755481] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:45:55.491 [2024-10-12 22:34:13.755548] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919250 ] 00:45:55.491 [2024-10-12 22:34:13.837735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:55.491 [2024-10-12 22:34:13.885656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:45:56.434 22:34:14 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:56.434 22:34:14 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:45:56.434 22:34:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5f07zuRF5O 00:45:56.434 22:34:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5f07zuRF5O 00:45:56.434 22:34:14 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.O3jzKkKf6S 00:45:56.435 22:34:14 keyring_file -- keyring/common.sh@8 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.O3jzKkKf6S 00:45:56.695 22:34:14 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:45:56.695 22:34:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:45:56.695 22:34:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:56.695 22:34:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:56.695 22:34:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:56.695 22:34:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5f07zuRF5O == \/\t\m\p\/\t\m\p\.\5\f\0\7\z\u\R\F\5\O ]] 00:45:56.695 22:34:15 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:45:56.695 22:34:15 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:45:56.695 22:34:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:56.695 22:34:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:56.695 22:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:56.956 22:34:15 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.O3jzKkKf6S == \/\t\m\p\/\t\m\p\.\O\3\j\z\K\k\K\f\6\S ]] 00:45:56.956 22:34:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:45:56.956 22:34:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:56.956 22:34:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:56.956 22:34:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:56.956 22:34:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:56.956 22:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:45:57.217 22:34:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:45:57.217 22:34:15 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:45:57.217 22:34:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:57.217 22:34:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:57.217 22:34:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:57.217 22:34:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:57.217 22:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:57.478 22:34:15 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:45:57.478 22:34:15 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:57.478 22:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:45:57.478 [2024-10-12 22:34:15.880596] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:57.478 nvme0n1 00:45:57.739 22:34:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:45:57.739 22:34:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:57.739 22:34:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:57.739 22:34:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:57.739 22:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:57.739 22:34:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key0")' 00:45:57.739 22:34:16 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:45:57.739 22:34:16 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:45:57.739 22:34:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:57.739 22:34:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:57.739 22:34:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:57.739 22:34:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:57.739 22:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:58.000 22:34:16 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:45:58.000 22:34:16 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:58.000 Running I/O for 1 seconds... 00:45:59.385 18557.00 IOPS, 72.49 MiB/s 00:45:59.385 Latency(us) 00:45:59.385 [2024-10-12T20:34:17.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:59.385 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:45:59.385 nvme0n1 : 1.00 18615.86 72.72 0.00 0.00 6862.84 2348.37 13707.95 00:45:59.385 [2024-10-12T20:34:17.874Z] =================================================================================================================== 00:45:59.385 [2024-10-12T20:34:17.874Z] Total : 18615.86 72.72 0.00 0.00 6862.84 2348.37 13707.95 00:45:59.385 { 00:45:59.385 "results": [ 00:45:59.385 { 00:45:59.385 "job": "nvme0n1", 00:45:59.385 "core_mask": "0x2", 00:45:59.385 "workload": "randrw", 00:45:59.385 "percentage": 50, 00:45:59.385 "status": "finished", 00:45:59.385 "queue_depth": 128, 00:45:59.385 "io_size": 4096, 00:45:59.385 "runtime": 1.003768, 00:45:59.385 "iops": 18615.855456639383, 00:45:59.385 "mibps": 
72.71818537749759, 00:45:59.385 "io_failed": 0, 00:45:59.385 "io_timeout": 0, 00:45:59.385 "avg_latency_us": 6862.844054372257, 00:45:59.385 "min_latency_us": 2348.3733333333334, 00:45:59.385 "max_latency_us": 13707.946666666667 00:45:59.385 } 00:45:59.385 ], 00:45:59.385 "core_count": 1 00:45:59.385 } 00:45:59.385 22:34:17 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:59.385 22:34:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:59.385 22:34:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:45:59.385 22:34:17 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:59.385 22:34:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:45:59.645 22:34:18 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:45:59.645 22:34:18 keyring_file -- keyring/file.sh@70 -- # NOT 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:59.645 22:34:18 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:45:59.645 22:34:18 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:59.645 22:34:18 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:45:59.645 22:34:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:59.645 22:34:18 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:45:59.645 22:34:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:45:59.645 22:34:18 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:59.645 22:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:45:59.906 [2024-10-12 22:34:18.215344] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:59.906 [2024-10-12 22:34:18.215958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bdd80 (107): Transport endpoint is not connected 00:45:59.906 [2024-10-12 22:34:18.216954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bdd80 (9): Bad file descriptor 00:45:59.906 [2024-10-12 22:34:18.217956] 
nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:45:59.906 [2024-10-12 22:34:18.217965] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:59.906 [2024-10-12 22:34:18.217971] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:59.906 [2024-10-12 22:34:18.217977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:45:59.906 request: 00:45:59.906 { 00:45:59.906 "name": "nvme0", 00:45:59.906 "trtype": "tcp", 00:45:59.906 "traddr": "127.0.0.1", 00:45:59.906 "adrfam": "ipv4", 00:45:59.906 "trsvcid": "4420", 00:45:59.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:59.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:59.906 "prchk_reftag": false, 00:45:59.906 "prchk_guard": false, 00:45:59.906 "hdgst": false, 00:45:59.906 "ddgst": false, 00:45:59.906 "psk": "key1", 00:45:59.906 "allow_unrecognized_csi": false, 00:45:59.906 "method": "bdev_nvme_attach_controller", 00:45:59.906 "req_id": 1 00:45:59.906 } 00:45:59.906 Got JSON-RPC error response 00:45:59.906 response: 00:45:59.906 { 00:45:59.907 "code": -5, 00:45:59.907 "message": "Input/output error" 00:45:59.907 } 00:45:59.907 22:34:18 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:45:59.907 22:34:18 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:45:59.907 22:34:18 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:45:59.907 22:34:18 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:45:59.907 22:34:18 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:45:59.907 22:34:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:45:59.907 22:34:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:45:59.907 22:34:18 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:45:59.907 22:34:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:45:59.907 22:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:00.168 22:34:18 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:00.168 22:34:18 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:46:00.168 22:34:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:00.168 22:34:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:00.168 22:34:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:00.168 22:34:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:00.168 22:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:00.168 22:34:18 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:46:00.168 22:34:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:46:00.168 22:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:00.429 22:34:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:46:00.429 22:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:00.429 22:34:18 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:46:00.429 22:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:00.429 22:34:18 keyring_file -- keyring/file.sh@78 -- # jq length 00:46:00.689 22:34:19 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:46:00.689 22:34:19 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.5f07zuRF5O 00:46:00.689 22:34:19 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5f07zuRF5O 00:46:00.689 22:34:19 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:00.689 22:34:19 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5f07zuRF5O 00:46:00.689 22:34:19 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:00.689 22:34:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:00.689 22:34:19 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:00.689 22:34:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:00.689 22:34:19 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5f07zuRF5O 00:46:00.689 22:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5f07zuRF5O 00:46:00.950 [2024-10-12 22:34:19.237699] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5f07zuRF5O': 0100660 00:46:00.950 [2024-10-12 22:34:19.237718] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:00.950 request: 00:46:00.950 { 00:46:00.950 "name": "key0", 00:46:00.950 "path": "/tmp/tmp.5f07zuRF5O", 00:46:00.950 "method": "keyring_file_add_key", 00:46:00.950 "req_id": 1 00:46:00.950 } 00:46:00.950 Got JSON-RPC error response 00:46:00.950 response: 00:46:00.950 { 00:46:00.950 "code": -1, 00:46:00.950 "message": "Operation not permitted" 00:46:00.950 } 00:46:00.950 22:34:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:00.950 22:34:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:00.950 22:34:19 
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:00.950 22:34:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:00.950 22:34:19 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.5f07zuRF5O 00:46:00.950 22:34:19 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5f07zuRF5O 00:46:00.950 22:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5f07zuRF5O 00:46:00.950 22:34:19 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.5f07zuRF5O 00:46:00.950 22:34:19 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:46:00.950 22:34:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:00.950 22:34:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:00.950 22:34:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:00.950 22:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:00.950 22:34:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:01.211 22:34:19 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:46:01.211 22:34:19 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:01.211 22:34:19 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:01.211 22:34:19 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:01.211 22:34:19 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:01.211 22:34:19 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:01.211 22:34:19 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:01.211 22:34:19 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:01.211 22:34:19 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:01.211 22:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:01.472 [2024-10-12 22:34:19.763031] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5f07zuRF5O': No such file or directory 00:46:01.472 [2024-10-12 22:34:19.763045] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:01.472 [2024-10-12 22:34:19.763059] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:01.472 [2024-10-12 22:34:19.763065] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:46:01.472 [2024-10-12 22:34:19.763071] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:01.472 [2024-10-12 22:34:19.763076] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:01.472 request: 00:46:01.472 { 00:46:01.472 "name": "nvme0", 00:46:01.472 "trtype": "tcp", 00:46:01.472 "traddr": "127.0.0.1", 00:46:01.472 "adrfam": "ipv4", 00:46:01.472 "trsvcid": "4420", 00:46:01.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:01.472 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:46:01.472 "prchk_reftag": false, 00:46:01.472 "prchk_guard": false, 00:46:01.472 "hdgst": false, 00:46:01.472 "ddgst": false, 00:46:01.472 "psk": "key0", 00:46:01.472 "allow_unrecognized_csi": false, 00:46:01.472 "method": "bdev_nvme_attach_controller", 00:46:01.472 "req_id": 1 00:46:01.472 } 00:46:01.472 Got JSON-RPC error response 00:46:01.472 response: 00:46:01.472 { 00:46:01.472 "code": -19, 00:46:01.472 "message": "No such device" 00:46:01.472 } 00:46:01.472 22:34:19 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:01.472 22:34:19 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:01.472 22:34:19 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:01.472 22:34:19 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:01.472 22:34:19 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:46:01.472 22:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:01.472 22:34:19 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:01.472 22:34:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:01.472 22:34:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:01.472 22:34:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:01.472 22:34:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:01.472 22:34:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:01.472 22:34:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fNT51zB8Df 00:46:01.472 22:34:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:01.472 22:34:19 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:01.472 22:34:19 keyring_file -- 
nvmf/common.sh@726 -- # local prefix key digest 00:46:01.472 22:34:19 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:46:01.472 22:34:19 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:46:01.472 22:34:19 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:46:01.472 22:34:19 keyring_file -- nvmf/common.sh@729 -- # python - 00:46:01.734 22:34:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fNT51zB8Df 00:46:01.734 22:34:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fNT51zB8Df 00:46:01.734 22:34:19 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.fNT51zB8Df 00:46:01.734 22:34:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fNT51zB8Df 00:46:01.734 22:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fNT51zB8Df 00:46:01.734 22:34:20 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:01.734 22:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:02.054 nvme0n1 00:46:02.054 22:34:20 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:46:02.054 22:34:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:02.054 22:34:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:02.054 22:34:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:02.054 22:34:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:02.054 22:34:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:02.343 22:34:20 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:46:02.343 22:34:20 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:46:02.343 22:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:02.343 22:34:20 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:46:02.343 22:34:20 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:46:02.343 22:34:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:02.343 22:34:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:02.343 22:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:02.629 22:34:20 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:46:02.629 22:34:20 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:46:02.629 22:34:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:02.629 22:34:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:02.629 22:34:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:02.629 22:34:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:02.629 22:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:02.896 22:34:21 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:46:02.896 22:34:21 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:02.896 22:34:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:46:02.896 22:34:21 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:46:02.896 22:34:21 keyring_file -- keyring/file.sh@105 -- # jq length 00:46:02.896 22:34:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:03.156 22:34:21 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:46:03.156 22:34:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fNT51zB8Df 00:46:03.156 22:34:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fNT51zB8Df 00:46:03.156 22:34:21 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.O3jzKkKf6S 00:46:03.156 22:34:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.O3jzKkKf6S 00:46:03.417 22:34:21 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:03.417 22:34:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:03.678 nvme0n1 00:46:03.678 22:34:22 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:46:03.678 22:34:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:46:03.939 22:34:22 keyring_file -- keyring/file.sh@113 -- # config='{ 00:46:03.939 "subsystems": [ 00:46:03.939 { 00:46:03.939 "subsystem": 
"keyring", 00:46:03.939 "config": [ 00:46:03.939 { 00:46:03.939 "method": "keyring_file_add_key", 00:46:03.939 "params": { 00:46:03.939 "name": "key0", 00:46:03.939 "path": "/tmp/tmp.fNT51zB8Df" 00:46:03.939 } 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "method": "keyring_file_add_key", 00:46:03.939 "params": { 00:46:03.939 "name": "key1", 00:46:03.939 "path": "/tmp/tmp.O3jzKkKf6S" 00:46:03.939 } 00:46:03.939 } 00:46:03.939 ] 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "subsystem": "iobuf", 00:46:03.939 "config": [ 00:46:03.939 { 00:46:03.939 "method": "iobuf_set_options", 00:46:03.939 "params": { 00:46:03.939 "small_pool_count": 8192, 00:46:03.939 "large_pool_count": 1024, 00:46:03.939 "small_bufsize": 8192, 00:46:03.939 "large_bufsize": 135168 00:46:03.939 } 00:46:03.939 } 00:46:03.939 ] 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "subsystem": "sock", 00:46:03.939 "config": [ 00:46:03.939 { 00:46:03.939 "method": "sock_set_default_impl", 00:46:03.939 "params": { 00:46:03.939 "impl_name": "posix" 00:46:03.939 } 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "method": "sock_impl_set_options", 00:46:03.939 "params": { 00:46:03.939 "impl_name": "ssl", 00:46:03.939 "recv_buf_size": 4096, 00:46:03.939 "send_buf_size": 4096, 00:46:03.939 "enable_recv_pipe": true, 00:46:03.939 "enable_quickack": false, 00:46:03.939 "enable_placement_id": 0, 00:46:03.939 "enable_zerocopy_send_server": true, 00:46:03.939 "enable_zerocopy_send_client": false, 00:46:03.939 "zerocopy_threshold": 0, 00:46:03.939 "tls_version": 0, 00:46:03.939 "enable_ktls": false 00:46:03.939 } 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "method": "sock_impl_set_options", 00:46:03.939 "params": { 00:46:03.939 "impl_name": "posix", 00:46:03.939 "recv_buf_size": 2097152, 00:46:03.939 "send_buf_size": 2097152, 00:46:03.939 "enable_recv_pipe": true, 00:46:03.939 "enable_quickack": false, 00:46:03.939 "enable_placement_id": 0, 00:46:03.939 "enable_zerocopy_send_server": true, 00:46:03.939 
"enable_zerocopy_send_client": false, 00:46:03.939 "zerocopy_threshold": 0, 00:46:03.939 "tls_version": 0, 00:46:03.939 "enable_ktls": false 00:46:03.939 } 00:46:03.939 } 00:46:03.939 ] 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "subsystem": "vmd", 00:46:03.939 "config": [] 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "subsystem": "accel", 00:46:03.939 "config": [ 00:46:03.939 { 00:46:03.939 "method": "accel_set_options", 00:46:03.939 "params": { 00:46:03.939 "small_cache_size": 128, 00:46:03.939 "large_cache_size": 16, 00:46:03.939 "task_count": 2048, 00:46:03.939 "sequence_count": 2048, 00:46:03.939 "buf_count": 2048 00:46:03.939 } 00:46:03.939 } 00:46:03.939 ] 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "subsystem": "bdev", 00:46:03.939 "config": [ 00:46:03.939 { 00:46:03.939 "method": "bdev_set_options", 00:46:03.939 "params": { 00:46:03.939 "bdev_io_pool_size": 65535, 00:46:03.939 "bdev_io_cache_size": 256, 00:46:03.939 "bdev_auto_examine": true, 00:46:03.939 "iobuf_small_cache_size": 128, 00:46:03.939 "iobuf_large_cache_size": 16 00:46:03.939 } 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "method": "bdev_raid_set_options", 00:46:03.939 "params": { 00:46:03.939 "process_window_size_kb": 1024, 00:46:03.939 "process_max_bandwidth_mb_sec": 0 00:46:03.939 } 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "method": "bdev_iscsi_set_options", 00:46:03.939 "params": { 00:46:03.939 "timeout_sec": 30 00:46:03.939 } 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "method": "bdev_nvme_set_options", 00:46:03.939 "params": { 00:46:03.939 "action_on_timeout": "none", 00:46:03.939 "timeout_us": 0, 00:46:03.939 "timeout_admin_us": 0, 00:46:03.939 "keep_alive_timeout_ms": 10000, 00:46:03.939 "arbitration_burst": 0, 00:46:03.939 "low_priority_weight": 0, 00:46:03.939 "medium_priority_weight": 0, 00:46:03.939 "high_priority_weight": 0, 00:46:03.939 "nvme_adminq_poll_period_us": 10000, 00:46:03.939 "nvme_ioq_poll_period_us": 0, 00:46:03.939 "io_queue_requests": 512, 00:46:03.939 
"delay_cmd_submit": true, 00:46:03.939 "transport_retry_count": 4, 00:46:03.939 "bdev_retry_count": 3, 00:46:03.939 "transport_ack_timeout": 0, 00:46:03.939 "ctrlr_loss_timeout_sec": 0, 00:46:03.939 "reconnect_delay_sec": 0, 00:46:03.939 "fast_io_fail_timeout_sec": 0, 00:46:03.939 "disable_auto_failback": false, 00:46:03.939 "generate_uuids": false, 00:46:03.939 "transport_tos": 0, 00:46:03.939 "nvme_error_stat": false, 00:46:03.939 "rdma_srq_size": 0, 00:46:03.939 "io_path_stat": false, 00:46:03.939 "allow_accel_sequence": false, 00:46:03.939 "rdma_max_cq_size": 0, 00:46:03.939 "rdma_cm_event_timeout_ms": 0, 00:46:03.939 "dhchap_digests": [ 00:46:03.939 "sha256", 00:46:03.939 "sha384", 00:46:03.939 "sha512" 00:46:03.939 ], 00:46:03.939 "dhchap_dhgroups": [ 00:46:03.939 "null", 00:46:03.939 "ffdhe2048", 00:46:03.939 "ffdhe3072", 00:46:03.939 "ffdhe4096", 00:46:03.939 "ffdhe6144", 00:46:03.939 "ffdhe8192" 00:46:03.939 ] 00:46:03.939 } 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "method": "bdev_nvme_attach_controller", 00:46:03.939 "params": { 00:46:03.939 "name": "nvme0", 00:46:03.939 "trtype": "TCP", 00:46:03.939 "adrfam": "IPv4", 00:46:03.939 "traddr": "127.0.0.1", 00:46:03.939 "trsvcid": "4420", 00:46:03.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:03.939 "prchk_reftag": false, 00:46:03.939 "prchk_guard": false, 00:46:03.939 "ctrlr_loss_timeout_sec": 0, 00:46:03.939 "reconnect_delay_sec": 0, 00:46:03.939 "fast_io_fail_timeout_sec": 0, 00:46:03.939 "psk": "key0", 00:46:03.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:03.939 "hdgst": false, 00:46:03.939 "ddgst": false 00:46:03.939 } 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "method": "bdev_nvme_set_hotplug", 00:46:03.939 "params": { 00:46:03.939 "period_us": 100000, 00:46:03.939 "enable": false 00:46:03.939 } 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "method": "bdev_wait_for_examine" 00:46:03.939 } 00:46:03.939 ] 00:46:03.939 }, 00:46:03.939 { 00:46:03.939 "subsystem": "nbd", 00:46:03.939 "config": [] 
00:46:03.939 } 00:46:03.939 ] 00:46:03.939 }' 00:46:03.939 22:34:22 keyring_file -- keyring/file.sh@115 -- # killprocess 3919250 00:46:03.939 22:34:22 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3919250 ']' 00:46:03.939 22:34:22 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3919250 00:46:03.939 22:34:22 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:03.939 22:34:22 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:03.939 22:34:22 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3919250 00:46:03.939 22:34:22 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:03.939 22:34:22 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:03.939 22:34:22 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3919250' 00:46:03.939 killing process with pid 3919250 00:46:03.939 22:34:22 keyring_file -- common/autotest_common.sh@969 -- # kill 3919250 00:46:03.939 Received shutdown signal, test time was about 1.000000 seconds 00:46:03.939 00:46:03.939 Latency(us) 00:46:03.939 [2024-10-12T20:34:22.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:03.939 [2024-10-12T20:34:22.428Z] =================================================================================================================== 00:46:03.939 [2024-10-12T20:34:22.428Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:03.939 22:34:22 keyring_file -- common/autotest_common.sh@974 -- # wait 3919250 00:46:04.201 22:34:22 keyring_file -- keyring/file.sh@118 -- # bperfpid=3921063 00:46:04.201 22:34:22 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3921063 /var/tmp/bperf.sock 00:46:04.201 22:34:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3921063 ']' 00:46:04.201 22:34:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:04.201 22:34:22 keyring_file 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:46:04.201 22:34:22 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:46:04.201 22:34:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:04.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:04.201 22:34:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:04.201 22:34:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:04.201 22:34:22 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:46:04.201 "subsystems": [ 00:46:04.201 { 00:46:04.201 "subsystem": "keyring", 00:46:04.201 "config": [ 00:46:04.201 { 00:46:04.201 "method": "keyring_file_add_key", 00:46:04.201 "params": { 00:46:04.201 "name": "key0", 00:46:04.201 "path": "/tmp/tmp.fNT51zB8Df" 00:46:04.201 } 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "method": "keyring_file_add_key", 00:46:04.201 "params": { 00:46:04.201 "name": "key1", 00:46:04.201 "path": "/tmp/tmp.O3jzKkKf6S" 00:46:04.201 } 00:46:04.201 } 00:46:04.201 ] 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "subsystem": "iobuf", 00:46:04.201 "config": [ 00:46:04.201 { 00:46:04.201 "method": "iobuf_set_options", 00:46:04.201 "params": { 00:46:04.201 "small_pool_count": 8192, 00:46:04.201 "large_pool_count": 1024, 00:46:04.201 "small_bufsize": 8192, 00:46:04.201 "large_bufsize": 135168 00:46:04.201 } 00:46:04.201 } 00:46:04.201 ] 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "subsystem": "sock", 00:46:04.201 "config": [ 00:46:04.201 { 00:46:04.201 "method": "sock_set_default_impl", 00:46:04.201 "params": { 00:46:04.201 "impl_name": "posix" 00:46:04.201 } 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "method": "sock_impl_set_options", 00:46:04.201 "params": { 00:46:04.201 
"impl_name": "ssl", 00:46:04.201 "recv_buf_size": 4096, 00:46:04.201 "send_buf_size": 4096, 00:46:04.201 "enable_recv_pipe": true, 00:46:04.201 "enable_quickack": false, 00:46:04.201 "enable_placement_id": 0, 00:46:04.201 "enable_zerocopy_send_server": true, 00:46:04.201 "enable_zerocopy_send_client": false, 00:46:04.201 "zerocopy_threshold": 0, 00:46:04.201 "tls_version": 0, 00:46:04.201 "enable_ktls": false 00:46:04.201 } 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "method": "sock_impl_set_options", 00:46:04.201 "params": { 00:46:04.201 "impl_name": "posix", 00:46:04.201 "recv_buf_size": 2097152, 00:46:04.201 "send_buf_size": 2097152, 00:46:04.201 "enable_recv_pipe": true, 00:46:04.201 "enable_quickack": false, 00:46:04.201 "enable_placement_id": 0, 00:46:04.201 "enable_zerocopy_send_server": true, 00:46:04.201 "enable_zerocopy_send_client": false, 00:46:04.201 "zerocopy_threshold": 0, 00:46:04.201 "tls_version": 0, 00:46:04.201 "enable_ktls": false 00:46:04.201 } 00:46:04.201 } 00:46:04.201 ] 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "subsystem": "vmd", 00:46:04.201 "config": [] 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "subsystem": "accel", 00:46:04.201 "config": [ 00:46:04.201 { 00:46:04.201 "method": "accel_set_options", 00:46:04.201 "params": { 00:46:04.201 "small_cache_size": 128, 00:46:04.201 "large_cache_size": 16, 00:46:04.201 "task_count": 2048, 00:46:04.201 "sequence_count": 2048, 00:46:04.201 "buf_count": 2048 00:46:04.201 } 00:46:04.201 } 00:46:04.201 ] 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "subsystem": "bdev", 00:46:04.201 "config": [ 00:46:04.201 { 00:46:04.201 "method": "bdev_set_options", 00:46:04.201 "params": { 00:46:04.201 "bdev_io_pool_size": 65535, 00:46:04.201 "bdev_io_cache_size": 256, 00:46:04.201 "bdev_auto_examine": true, 00:46:04.201 "iobuf_small_cache_size": 128, 00:46:04.201 "iobuf_large_cache_size": 16 00:46:04.201 } 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "method": "bdev_raid_set_options", 00:46:04.201 "params": { 
00:46:04.201 "process_window_size_kb": 1024, 00:46:04.201 "process_max_bandwidth_mb_sec": 0 00:46:04.201 } 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "method": "bdev_iscsi_set_options", 00:46:04.201 "params": { 00:46:04.201 "timeout_sec": 30 00:46:04.201 } 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "method": "bdev_nvme_set_options", 00:46:04.201 "params": { 00:46:04.201 "action_on_timeout": "none", 00:46:04.201 "timeout_us": 0, 00:46:04.201 "timeout_admin_us": 0, 00:46:04.201 "keep_alive_timeout_ms": 10000, 00:46:04.201 "arbitration_burst": 0, 00:46:04.201 "low_priority_weight": 0, 00:46:04.201 "medium_priority_weight": 0, 00:46:04.201 "high_priority_weight": 0, 00:46:04.201 "nvme_adminq_poll_period_us": 10000, 00:46:04.201 "nvme_ioq_poll_period_us": 0, 00:46:04.201 "io_queue_requests": 512, 00:46:04.201 "delay_cmd_submit": true, 00:46:04.201 "transport_retry_count": 4, 00:46:04.201 "bdev_retry_count": 3, 00:46:04.201 "transport_ack_timeout": 0, 00:46:04.201 "ctrlr_loss_timeout_sec": 0, 00:46:04.201 "reconnect_delay_sec": 0, 00:46:04.201 "fast_io_fail_timeout_sec": 0, 00:46:04.201 "disable_auto_failback": false, 00:46:04.201 "generate_uuids": false, 00:46:04.201 "transport_tos": 0, 00:46:04.201 "nvme_error_stat": false, 00:46:04.201 "rdma_srq_size": 0, 00:46:04.201 "io_path_stat": false, 00:46:04.201 "allow_accel_sequence": false, 00:46:04.201 "rdma_max_cq_size": 0, 00:46:04.201 "rdma_cm_event_timeout_ms": 0, 00:46:04.201 "dhchap_digests": [ 00:46:04.201 "sha256", 00:46:04.201 "sha384", 00:46:04.201 "sha512" 00:46:04.201 ], 00:46:04.201 "dhchap_dhgroups": [ 00:46:04.201 "null", 00:46:04.201 "ffdhe2048", 00:46:04.201 "ffdhe3072", 00:46:04.201 "ffdhe4096", 00:46:04.201 "ffdhe6144", 00:46:04.201 "ffdhe8192" 00:46:04.201 ] 00:46:04.201 } 00:46:04.201 }, 00:46:04.201 { 00:46:04.201 "method": "bdev_nvme_attach_controller", 00:46:04.201 "params": { 00:46:04.201 "name": "nvme0", 00:46:04.201 "trtype": "TCP", 00:46:04.201 "adrfam": "IPv4", 00:46:04.201 "traddr": 
"127.0.0.1", 00:46:04.201 "trsvcid": "4420", 00:46:04.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:04.201 "prchk_reftag": false, 00:46:04.201 "prchk_guard": false, 00:46:04.201 "ctrlr_loss_timeout_sec": 0, 00:46:04.201 "reconnect_delay_sec": 0, 00:46:04.201 "fast_io_fail_timeout_sec": 0, 00:46:04.201 "psk": "key0", 00:46:04.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:04.202 "hdgst": false, 00:46:04.202 "ddgst": false 00:46:04.202 } 00:46:04.202 }, 00:46:04.202 { 00:46:04.202 "method": "bdev_nvme_set_hotplug", 00:46:04.202 "params": { 00:46:04.202 "period_us": 100000, 00:46:04.202 "enable": false 00:46:04.202 } 00:46:04.202 }, 00:46:04.202 { 00:46:04.202 "method": "bdev_wait_for_examine" 00:46:04.202 } 00:46:04.202 ] 00:46:04.202 }, 00:46:04.202 { 00:46:04.202 "subsystem": "nbd", 00:46:04.202 "config": [] 00:46:04.202 } 00:46:04.202 ] 00:46:04.202 }' 00:46:04.202 [2024-10-12 22:34:22.505364] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:46:04.202 [2024-10-12 22:34:22.505419] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921063 ] 00:46:04.202 [2024-10-12 22:34:22.581565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:04.202 [2024-10-12 22:34:22.609279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:04.462 [2024-10-12 22:34:22.746383] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:05.032 22:34:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:05.032 22:34:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:46:05.032 22:34:23 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:46:05.032 22:34:23 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:05.032 22:34:23 keyring_file -- keyring/file.sh@121 -- # jq length 00:46:05.032 22:34:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:46:05.032 22:34:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:46:05.032 22:34:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:05.032 22:34:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:05.032 22:34:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:05.032 22:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:05.032 22:34:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:05.293 22:34:23 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:46:05.293 22:34:23 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:46:05.293 22:34:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:05.293 22:34:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:05.293 22:34:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:05.293 22:34:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:05.293 22:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:05.554 22:34:23 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:46:05.554 22:34:23 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:46:05.554 22:34:23 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:46:05.554 22:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:46:05.554 22:34:24 keyring_file -- 
keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:46:05.554 22:34:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:46:05.554 22:34:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.fNT51zB8Df /tmp/tmp.O3jzKkKf6S 00:46:05.554 22:34:24 keyring_file -- keyring/file.sh@20 -- # killprocess 3921063 00:46:05.554 22:34:24 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3921063 ']' 00:46:05.554 22:34:24 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3921063 00:46:05.554 22:34:24 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:05.554 22:34:24 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:05.554 22:34:24 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3921063 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3921063' 00:46:05.814 killing process with pid 3921063 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@969 -- # kill 3921063 00:46:05.814 Received shutdown signal, test time was about 1.000000 seconds 00:46:05.814 00:46:05.814 Latency(us) 00:46:05.814 [2024-10-12T20:34:24.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:05.814 [2024-10-12T20:34:24.303Z] =================================================================================================================== 00:46:05.814 [2024-10-12T20:34:24.303Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@974 -- # wait 3921063 00:46:05.814 22:34:24 keyring_file -- keyring/file.sh@21 -- # killprocess 3919127 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3919127 ']' 00:46:05.814 22:34:24 keyring_file -- 
common/autotest_common.sh@954 -- # kill -0 3919127 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3919127 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3919127' 00:46:05.814 killing process with pid 3919127 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@969 -- # kill 3919127 00:46:05.814 22:34:24 keyring_file -- common/autotest_common.sh@974 -- # wait 3919127 00:46:06.077 00:46:06.077 real 0m12.054s 00:46:06.077 user 0m28.976s 00:46:06.077 sys 0m2.803s 00:46:06.077 22:34:24 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:06.077 22:34:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:06.077 ************************************ 00:46:06.077 END TEST keyring_file 00:46:06.077 ************************************ 00:46:06.077 22:34:24 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:46:06.077 22:34:24 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:06.077 22:34:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:46:06.077 22:34:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:06.077 22:34:24 -- common/autotest_common.sh@10 -- # set +x 00:46:06.077 ************************************ 00:46:06.077 START TEST keyring_linux 00:46:06.077 ************************************ 00:46:06.077 22:34:24 keyring_linux -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:06.077 Joined session keyring: 182054049 00:46:06.353 * Looking for test storage... 00:46:06.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:06.353 22:34:24 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:06.353 22:34:24 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:46:06.353 22:34:24 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:06.353 22:34:24 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@345 -- # : 1 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@368 -- # return 0 00:46:06.353 22:34:24 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:06.353 22:34:24 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:06.353 --rc genhtml_branch_coverage=1 00:46:06.353 --rc genhtml_function_coverage=1 00:46:06.353 --rc genhtml_legend=1 00:46:06.353 --rc geninfo_all_blocks=1 00:46:06.353 --rc geninfo_unexecuted_blocks=1 00:46:06.353 00:46:06.353 ' 00:46:06.353 22:34:24 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:06.353 --rc genhtml_branch_coverage=1 00:46:06.353 --rc genhtml_function_coverage=1 00:46:06.353 --rc genhtml_legend=1 00:46:06.353 --rc geninfo_all_blocks=1 00:46:06.353 --rc geninfo_unexecuted_blocks=1 00:46:06.353 00:46:06.353 ' 
00:46:06.353 22:34:24 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:06.353 --rc genhtml_branch_coverage=1 00:46:06.353 --rc genhtml_function_coverage=1 00:46:06.353 --rc genhtml_legend=1 00:46:06.353 --rc geninfo_all_blocks=1 00:46:06.353 --rc geninfo_unexecuted_blocks=1 00:46:06.353 00:46:06.353 ' 00:46:06.353 22:34:24 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:06.353 --rc genhtml_branch_coverage=1 00:46:06.353 --rc genhtml_function_coverage=1 00:46:06.353 --rc genhtml_legend=1 00:46:06.353 --rc geninfo_all_blocks=1 00:46:06.353 --rc geninfo_unexecuted_blocks=1 00:46:06.353 00:46:06.353 ' 00:46:06.353 22:34:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:06.353 22:34:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:06.353 22:34:24 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:06.353 22:34:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:06.353 22:34:24 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:06.353 22:34:24 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:06.353 22:34:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:46:06.353 22:34:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:06.353 22:34:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:46:06.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:06.354 22:34:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:06.354 22:34:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:06.354 22:34:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:46:06.354 22:34:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:46:06.354 22:34:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:46:06.354 22:34:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@728 -- # 
key=00112233445566778899aabbccddeeff 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@729 -- # python - 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:46:06.354 /tmp/:spdk-test:key0 00:46:06.354 22:34:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:46:06.354 22:34:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:46:06.354 22:34:24 keyring_linux -- nvmf/common.sh@729 -- # python - 00:46:06.614 22:34:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:46:06.614 22:34:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:46:06.614 /tmp/:spdk-test:key1 00:46:06.614 22:34:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3921507 00:46:06.614 22:34:24 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 3921507 00:46:06.614 22:34:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:06.614 22:34:24 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3921507 ']' 00:46:06.614 22:34:24 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:06.614 22:34:24 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:06.614 22:34:24 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:06.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:06.614 22:34:24 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:06.614 22:34:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:06.615 [2024-10-12 22:34:24.944883] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:46:06.615 [2024-10-12 22:34:24.944959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921507 ] 00:46:06.615 [2024-10-12 22:34:25.026370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:06.615 [2024-10-12 22:34:25.060406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:46:07.556 22:34:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:07.556 22:34:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:46:07.556 22:34:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:46:07.556 22:34:25 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:07.556 22:34:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:07.556 [2024-10-12 22:34:25.731552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:07.556 null0 00:46:07.556 [2024-10-12 22:34:25.763606] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:07.556 [2024-10-12 22:34:25.763954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:07.556 22:34:25 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:07.556 22:34:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:46:07.556 476346595 00:46:07.556 22:34:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:46:07.556 1043183022 00:46:07.556 22:34:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3921797 00:46:07.556 22:34:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3921797 /var/tmp/bperf.sock 00:46:07.556 22:34:25 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:46:07.556 22:34:25 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3921797 ']' 00:46:07.557 22:34:25 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:07.557 22:34:25 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:07.557 22:34:25 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:07.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:07.557 22:34:25 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:07.557 22:34:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:07.557 [2024-10-12 22:34:25.850455] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:46:07.557 [2024-10-12 22:34:25.850506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921797 ] 00:46:07.557 [2024-10-12 22:34:25.923967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:07.557 [2024-10-12 22:34:25.952325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:46:08.498 22:34:26 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:08.498 22:34:26 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:46:08.498 22:34:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:46:08.498 22:34:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:46:08.498 22:34:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:46:08.498 22:34:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:46:08.759 22:34:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:08.759 22:34:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:08.759 [2024-10-12 22:34:27.198956] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:09.021 nvme0n1 00:46:09.021 22:34:27 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:46:09.021 22:34:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:46:09.021 22:34:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:09.021 22:34:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:09.021 22:34:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:09.021 22:34:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:09.021 22:34:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:46:09.021 22:34:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:09.021 22:34:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:46:09.021 22:34:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:46:09.021 22:34:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:09.021 22:34:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:09.021 22:34:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:46:09.282 22:34:27 keyring_linux -- keyring/linux.sh@25 -- # sn=476346595 00:46:09.282 22:34:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:46:09.282 22:34:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:09.282 22:34:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 476346595 == \4\7\6\3\4\6\5\9\5 ]] 00:46:09.282 22:34:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 476346595 00:46:09.282 22:34:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:46:09.282 22:34:27 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:09.282 Running I/O for 1 seconds... 00:46:10.668 24551.00 IOPS, 95.90 MiB/s 00:46:10.668 Latency(us) 00:46:10.668 [2024-10-12T20:34:29.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:10.668 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:46:10.668 nvme0n1 : 1.01 24551.91 95.91 0.00 0.00 5198.48 4014.08 13052.59 00:46:10.668 [2024-10-12T20:34:29.157Z] =================================================================================================================== 00:46:10.668 [2024-10-12T20:34:29.157Z] Total : 24551.91 95.91 0.00 0.00 5198.48 4014.08 13052.59 00:46:10.668 { 00:46:10.668 "results": [ 00:46:10.668 { 00:46:10.668 "job": "nvme0n1", 00:46:10.668 "core_mask": "0x2", 00:46:10.668 "workload": "randread", 00:46:10.668 "status": "finished", 00:46:10.668 "queue_depth": 128, 00:46:10.668 "io_size": 4096, 00:46:10.668 "runtime": 1.005217, 00:46:10.668 "iops": 24551.9126715923, 00:46:10.668 "mibps": 95.90590887340743, 00:46:10.668 "io_failed": 0, 00:46:10.668 "io_timeout": 0, 00:46:10.668 "avg_latency_us": 5198.483431658563, 00:46:10.668 "min_latency_us": 4014.08, 00:46:10.668 "max_latency_us": 13052.586666666666 00:46:10.668 } 00:46:10.668 ], 00:46:10.668 "core_count": 1 00:46:10.668 } 00:46:10.668 22:34:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:10.668 22:34:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:10.668 22:34:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:46:10.668 22:34:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:46:10.668 22:34:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:10.668 22:34:28 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:10.668 22:34:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:10.668 22:34:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:10.668 22:34:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:46:10.668 22:34:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:10.668 22:34:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:46:10.668 22:34:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:10.668 22:34:29 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:46:10.668 22:34:29 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:10.668 22:34:29 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:10.668 22:34:29 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:10.668 22:34:29 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:10.668 22:34:29 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:10.668 22:34:29 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:10.668 22:34:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:10.929 [2024-10-12 22:34:29.292839] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:10.930 [2024-10-12 22:34:29.293546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5cea0 (107): Transport endpoint is not connected 00:46:10.930 [2024-10-12 22:34:29.294542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5cea0 (9): Bad file descriptor 00:46:10.930 [2024-10-12 22:34:29.295543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:10.930 [2024-10-12 22:34:29.295551] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:10.930 [2024-10-12 22:34:29.295557] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:10.930 [2024-10-12 22:34:29.295564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:46:10.930 request: 00:46:10.930 { 00:46:10.930 "name": "nvme0", 00:46:10.930 "trtype": "tcp", 00:46:10.930 "traddr": "127.0.0.1", 00:46:10.930 "adrfam": "ipv4", 00:46:10.930 "trsvcid": "4420", 00:46:10.930 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:10.930 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:10.930 "prchk_reftag": false, 00:46:10.930 "prchk_guard": false, 00:46:10.930 "hdgst": false, 00:46:10.930 "ddgst": false, 00:46:10.930 "psk": ":spdk-test:key1", 00:46:10.930 "allow_unrecognized_csi": false, 00:46:10.930 "method": "bdev_nvme_attach_controller", 00:46:10.930 "req_id": 1 00:46:10.930 } 00:46:10.930 Got JSON-RPC error response 00:46:10.930 response: 00:46:10.930 { 00:46:10.930 "code": -5, 00:46:10.930 "message": "Input/output error" 00:46:10.930 } 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@33 -- # sn=476346595 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 476346595 00:46:10.930 1 links removed 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:46:10.930 
22:34:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@33 -- # sn=1043183022 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1043183022 00:46:10.930 1 links removed 00:46:10.930 22:34:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3921797 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3921797 ']' 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3921797 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3921797 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3921797' 00:46:10.930 killing process with pid 3921797 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@969 -- # kill 3921797 00:46:10.930 Received shutdown signal, test time was about 1.000000 seconds 00:46:10.930 00:46:10.930 Latency(us) 00:46:10.930 [2024-10-12T20:34:29.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:10.930 [2024-10-12T20:34:29.419Z] =================================================================================================================== 00:46:10.930 [2024-10-12T20:34:29.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:10.930 22:34:29 keyring_linux -- common/autotest_common.sh@974 -- # wait 
3921797 00:46:11.190 22:34:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3921507 00:46:11.190 22:34:29 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3921507 ']' 00:46:11.190 22:34:29 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3921507 00:46:11.190 22:34:29 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:46:11.190 22:34:29 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:11.190 22:34:29 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3921507 00:46:11.190 22:34:29 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:11.190 22:34:29 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:11.190 22:34:29 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3921507' 00:46:11.190 killing process with pid 3921507 00:46:11.190 22:34:29 keyring_linux -- common/autotest_common.sh@969 -- # kill 3921507 00:46:11.190 22:34:29 keyring_linux -- common/autotest_common.sh@974 -- # wait 3921507 00:46:11.451 00:46:11.451 real 0m5.230s 00:46:11.451 user 0m9.684s 00:46:11.451 sys 0m1.491s 00:46:11.451 22:34:29 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:11.451 22:34:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:11.451 ************************************ 00:46:11.451 END TEST keyring_linux 00:46:11.451 ************************************ 00:46:11.451 22:34:29 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- 
spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:46:11.451 22:34:29 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:46:11.451 22:34:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:46:11.451 22:34:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:46:11.451 22:34:29 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:46:11.451 22:34:29 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:46:11.451 22:34:29 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:46:11.451 22:34:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:11.451 22:34:29 -- common/autotest_common.sh@10 -- # set +x 00:46:11.451 22:34:29 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:46:11.451 22:34:29 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:46:11.451 22:34:29 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:46:11.451 22:34:29 -- common/autotest_common.sh@10 -- # set +x 00:46:19.589 INFO: APP EXITING 00:46:19.589 INFO: killing all VMs 00:46:19.589 INFO: killing vhost app 00:46:19.589 INFO: EXIT DONE 00:46:22.132 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:46:22.132 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:46:22.132 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:46:22.393 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:46:22.393 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:46:22.393 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:46:22.393 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:46:22.393 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:46:22.393 0000:65:00.0 (144d a80a): Already using the nvme driver 00:46:22.393 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:46:22.393 0000:00:01.7 (8086 0b00): Already using the 
ioatdma driver 00:46:22.393 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:46:22.393 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:46:22.393 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:46:22.654 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:46:22.654 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:46:22.654 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:46:26.857 Cleaning 00:46:26.857 Removing: /var/run/dpdk/spdk0/config 00:46:26.857 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:46:26.857 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:46:26.857 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:46:26.857 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:46:26.857 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:46:26.857 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:46:26.857 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:46:26.857 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:46:26.858 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:46:26.858 Removing: /var/run/dpdk/spdk0/hugepage_info 00:46:26.858 Removing: /var/run/dpdk/spdk1/config 00:46:26.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:46:26.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:46:26.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:46:26.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:46:26.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:46:26.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:46:26.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:46:26.858 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:46:26.858 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:46:26.858 Removing: /var/run/dpdk/spdk1/hugepage_info 00:46:26.858 Removing: /var/run/dpdk/spdk2/config 00:46:26.858 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:46:26.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:46:26.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:46:26.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:46:26.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:46:26.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:46:26.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:46:26.858 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:46:26.858 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:46:26.858 Removing: /var/run/dpdk/spdk2/hugepage_info 00:46:26.858 Removing: /var/run/dpdk/spdk3/config 00:46:26.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:46:26.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:46:26.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:46:26.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:46:26.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:46:26.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:46:26.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:46:26.858 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:46:26.858 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:46:26.858 Removing: /var/run/dpdk/spdk3/hugepage_info 00:46:26.858 Removing: /var/run/dpdk/spdk4/config 00:46:26.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:46:26.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:46:26.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:46:26.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:46:26.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:46:26.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:46:26.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:46:26.858 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:46:26.858 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:46:26.858 Removing: /var/run/dpdk/spdk4/hugepage_info 00:46:26.858 Removing: /dev/shm/bdev_svc_trace.1 00:46:26.858 Removing: /dev/shm/nvmf_trace.0 00:46:26.858 Removing: /dev/shm/spdk_tgt_trace.pid3248612 00:46:26.858 Removing: /var/run/dpdk/spdk0 00:46:26.858 Removing: /var/run/dpdk/spdk1 00:46:26.858 Removing: /var/run/dpdk/spdk2 00:46:26.858 Removing: /var/run/dpdk/spdk3 00:46:26.858 Removing: /var/run/dpdk/spdk4 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3247120 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3248612 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3249473 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3250507 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3250853 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3251919 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3252136 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3252394 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3253531 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3254314 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3254703 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3255037 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3255400 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3255715 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3255955 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3256311 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3256701 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3257795 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3261354 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3261724 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3262093 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3262234 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3262804 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3262819 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3263284 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3263519 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3263884 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3263942 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3264260 00:46:26.858 Removing: 
/var/run/dpdk/spdk_pid3264408 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3265044 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3265229 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3265516 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3270322 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3275594 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3288329 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3289041 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3294311 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3294814 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3299885 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3307062 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3310388 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3322929 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3334549 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3336576 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3337819 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3358728 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3363638 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3463618 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3470028 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3477742 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3484957 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3484959 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3485964 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3486966 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3487979 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3488639 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3488675 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3488976 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3489145 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3489259 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3490313 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3491315 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3492325 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3492991 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3492997 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3493329 
00:46:26.858 Removing: /var/run/dpdk/spdk_pid3494458 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3495843 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3505684 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3540223 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3545664 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3547665 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3549799 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3550036 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3550377 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3550718 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3551429 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3553512 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3554859 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3555310 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3558517 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3559230 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3559935 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3564898 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3571559 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3571561 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3571563 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3576223 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3580950 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3586705 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3630630 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3635439 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3642605 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3644142 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3645677 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3647475 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3653538 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3658559 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3667644 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3667647 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3672693 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3673031 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3673281 00:46:26.858 Removing: 
/var/run/dpdk/spdk_pid3673704 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3673720 00:46:26.858 Removing: /var/run/dpdk/spdk_pid3675072 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3677069 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3679065 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3681083 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3683080 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3685026 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3692303 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3692961 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3694156 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3695420 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3702101 00:46:27.119 Removing: /var/run/dpdk/spdk_pid3705277 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3711737 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3718354 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3728424 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3737018 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3737070 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3760355 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3761040 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3761764 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3762418 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3763429 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3763993 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3764737 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3765522 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3770573 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3770913 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3777945 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3778321 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3784732 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3789810 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3801739 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3802418 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3807468 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3807816 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3812856 
00:46:27.120 Removing: /var/run/dpdk/spdk_pid3819349 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3822324 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3834484 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3845139 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3847003 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3848173 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3867798 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3872436 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3875613 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3883266 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3883327 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3889255 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3891452 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3893688 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3895159 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3897359 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3898966 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3909393 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3909916 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3910422 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3913345 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3914013 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3914462 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3919127 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3919250 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3921063 00:46:27.120 Removing: /var/run/dpdk/spdk_pid3921507 00:46:27.381 Removing: /var/run/dpdk/spdk_pid3921797 00:46:27.381 Clean 00:46:27.381 22:34:45 -- common/autotest_common.sh@1451 -- # return 0 00:46:27.381 22:34:45 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:46:27.381 22:34:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:27.381 22:34:45 -- common/autotest_common.sh@10 -- # set +x 00:46:27.381 22:34:45 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:46:27.381 22:34:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:27.381 22:34:45 -- common/autotest_common.sh@10 -- # set 
+x 00:46:27.381 22:34:45 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:46:27.381 22:34:45 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:46:27.381 22:34:45 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:46:27.381 22:34:45 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:46:27.381 22:34:45 -- spdk/autotest.sh@394 -- # hostname 00:46:27.381 22:34:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:46:27.642 geninfo: WARNING: invalid characters removed from testname! 
00:46:54.232 22:35:11 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:55.615 22:35:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:57.527 22:35:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:46:58.909 22:35:17 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:00.820 22:35:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:02.200 22:35:20 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:04.111 22:35:22 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:04.111 22:35:22 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:47:04.111 22:35:22 -- common/autotest_common.sh@1681 -- $ lcov --version 00:47:04.111 22:35:22 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:47:04.111 22:35:22 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:47:04.111 22:35:22 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:47:04.111 22:35:22 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:47:04.111 22:35:22 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:47:04.111 22:35:22 -- scripts/common.sh@336 -- $ IFS=.-: 00:47:04.111 22:35:22 -- scripts/common.sh@336 -- $ read -ra ver1 00:47:04.111 22:35:22 -- scripts/common.sh@337 -- $ IFS=.-: 00:47:04.111 22:35:22 -- scripts/common.sh@337 -- $ read -ra ver2 00:47:04.111 22:35:22 -- scripts/common.sh@338 -- $ local 'op=<' 00:47:04.111 22:35:22 -- scripts/common.sh@340 -- $ ver1_l=2 00:47:04.111 22:35:22 -- scripts/common.sh@341 -- $ ver2_l=1 00:47:04.111 22:35:22 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:47:04.111 22:35:22 -- scripts/common.sh@344 -- $ case "$op" in 00:47:04.111 22:35:22 -- scripts/common.sh@345 -- $ : 1 00:47:04.111 22:35:22 -- scripts/common.sh@364 
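The coverage post-processing above follows a common lcov pattern: merge the pre-test baseline with the post-test capture, then repeatedly strip third-party and system paths from the totals. A minimal sketch of that sequence, with a hypothetical `$out` directory and only a few of the strip patterns (the commands are built as strings here rather than executed, since lcov may not be installed):

```shell
# Sketch of the coverage post-processing in the log: merge base+test
# tracefiles, then remove unwanted paths. "$out" is a hypothetical
# output directory; run the printed commands where lcov is available.
out=./coverage_output

# Branch/function coverage enabled via --rc overrides, as in the log.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# 1) Merge the pre-test baseline with the post-test capture.
merge_cmd="lcov $LCOV_OPTS -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info"
echo "$merge_cmd"

# 2) Filter third-party and system code out of the totals, in place.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*'; do
    filter_cmd="lcov $LCOV_OPTS -q -r $out/cov_total.info '$pattern' -o $out/cov_total.info"
    echo "$filter_cmd"
done
```

Note the log writes each filtered result back over `cov_total.info`, so the removals compose one after another.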
-- $ (( v = 0 )) 00:47:04.111 22:35:22 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:04.111 22:35:22 -- scripts/common.sh@365 -- $ decimal 1 00:47:04.111 22:35:22 -- scripts/common.sh@353 -- $ local d=1 00:47:04.111 22:35:22 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:47:04.111 22:35:22 -- scripts/common.sh@355 -- $ echo 1 00:47:04.111 22:35:22 -- scripts/common.sh@365 -- $ ver1[v]=1 00:47:04.111 22:35:22 -- scripts/common.sh@366 -- $ decimal 2 00:47:04.111 22:35:22 -- scripts/common.sh@353 -- $ local d=2 00:47:04.111 22:35:22 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:47:04.112 22:35:22 -- scripts/common.sh@355 -- $ echo 2 00:47:04.112 22:35:22 -- scripts/common.sh@366 -- $ ver2[v]=2 00:47:04.112 22:35:22 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:47:04.112 22:35:22 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:47:04.112 22:35:22 -- scripts/common.sh@368 -- $ return 0 00:47:04.112 22:35:22 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:04.112 22:35:22 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:47:04.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:04.112 --rc genhtml_branch_coverage=1 00:47:04.112 --rc genhtml_function_coverage=1 00:47:04.112 --rc genhtml_legend=1 00:47:04.112 --rc geninfo_all_blocks=1 00:47:04.112 --rc geninfo_unexecuted_blocks=1 00:47:04.112 00:47:04.112 ' 00:47:04.112 22:35:22 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:47:04.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:04.112 --rc genhtml_branch_coverage=1 00:47:04.112 --rc genhtml_function_coverage=1 00:47:04.112 --rc genhtml_legend=1 00:47:04.112 --rc geninfo_all_blocks=1 00:47:04.112 --rc geninfo_unexecuted_blocks=1 00:47:04.112 00:47:04.112 ' 00:47:04.112 22:35:22 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:47:04.112 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:47:04.112 --rc genhtml_branch_coverage=1 00:47:04.112 --rc genhtml_function_coverage=1 00:47:04.112 --rc genhtml_legend=1 00:47:04.112 --rc geninfo_all_blocks=1 00:47:04.112 --rc geninfo_unexecuted_blocks=1 00:47:04.112 00:47:04.112 ' 00:47:04.112 22:35:22 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:47:04.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:04.112 --rc genhtml_branch_coverage=1 00:47:04.112 --rc genhtml_function_coverage=1 00:47:04.112 --rc genhtml_legend=1 00:47:04.112 --rc geninfo_all_blocks=1 00:47:04.112 --rc geninfo_unexecuted_blocks=1 00:47:04.112 00:47:04.112 ' 00:47:04.112 22:35:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:04.112 22:35:22 -- scripts/common.sh@15 -- $ shopt -s extglob 00:47:04.112 22:35:22 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:47:04.112 22:35:22 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:04.112 22:35:22 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:04.112 22:35:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:04.112 22:35:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:04.112 22:35:22 -- paths/export.sh@4 -- $ 
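The `cmp_versions` trace above splits each version string on `.-:` into an array and compares it component-wise, padding the shorter array with zeros — here deciding whether the installed lcov (1.15) is older than 2.0 so that version-appropriate options can be chosen. A minimal re-implementation under my own function name, assuming purely numeric components:

```shell
# Minimal sketch of the component-wise version compare the log runs
# (scripts/common.sh cmp_versions). version_lt is my name for it;
# non-numeric components (e.g. "rc1") are not handled in this sketch.
version_lt() {            # version_lt A B  ->  exit 0 iff A < B
    local -a v1 v2
    local IFS='.-:'       # split on dots, dashes, colons, as the log does
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # pad missing parts with 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1              # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.0"
```

The same shape underlies most shell-only semver checks: split, zero-pad, compare left to right, and stop at the first differing component.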
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:04.112 22:35:22 -- paths/export.sh@5 -- $ export PATH 00:47:04.112 22:35:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:04.112 22:35:22 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:47:04.112 22:35:22 -- common/autobuild_common.sh@479 -- $ date +%s 00:47:04.112 22:35:22 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1728765322.XXXXXX 00:47:04.112 22:35:22 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1728765322.AGCt6j 00:47:04.112 22:35:22 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:47:04.112 22:35:22 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:47:04.112 22:35:22 -- common/autobuild_common.sh@486 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:47:04.112 22:35:22 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:47:04.112 22:35:22 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:47:04.112 22:35:22 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:47:04.112 22:35:22 -- common/autobuild_common.sh@495 -- $ get_config_params 00:47:04.112 22:35:22 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:47:04.112 22:35:22 -- common/autotest_common.sh@10 -- $ set +x 00:47:04.112 22:35:22 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:47:04.112 22:35:22 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:47:04.112 22:35:22 -- pm/common@17 -- $ local monitor 00:47:04.112 22:35:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:04.112 22:35:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:04.112 22:35:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:04.112 22:35:22 -- pm/common@21 -- $ date +%s 00:47:04.112 22:35:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:04.112 22:35:22 -- pm/common@21 -- $ date +%s 00:47:04.112 22:35:22 -- pm/common@25 -- $ sleep 1 00:47:04.112 22:35:22 -- pm/common@21 -- $ date +%s 00:47:04.112 22:35:22 -- pm/common@21 -- $ date +%s 00:47:04.112 22:35:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728765322 00:47:04.112 22:35:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728765322 00:47:04.112 
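The workspace setup above creates a unique, epoch-stamped scratch directory with `mktemp -dt` so concurrent builds cannot collide. A small sketch of the same idea — the `spdk_` prefix mirrors the log, everything else is illustrative:

```shell
# Sketch of the scratch-workspace setup in the log: a unique temp
# directory named with the current epoch plus a random mktemp suffix.
stamp=$(date +%s)                                # epoch seconds
workspace=$(mktemp -dt "spdk_${stamp}.XXXXXX")   # e.g. /tmp/spdk_1728765322.AGCt6j
echo "SPDK_WORKSPACE=$workspace"

# Remove the scratch dir when finished with it.
rmdir "$workspace"
```

`-t` resolves the template under `$TMPDIR` (falling back to `/tmp`), and the `XXXXXX` suffix is what guarantees uniqueness even if two builds start in the same second.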
22:35:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728765322 00:47:04.112 22:35:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728765322 00:47:04.112 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728765322_collect-vmstat.pm.log 00:47:04.112 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728765322_collect-cpu-load.pm.log 00:47:04.112 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728765322_collect-cpu-temp.pm.log 00:47:04.112 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728765322_collect-bmc-pm.bmc.pm.log 00:47:05.055 22:35:23 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:47:05.055 22:35:23 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:47:05.055 22:35:23 -- spdk/autopackage.sh@14 -- $ timing_finish 00:47:05.056 22:35:23 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:05.056 22:35:23 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:47:05.056 22:35:23 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:05.056 22:35:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:47:05.056 22:35:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:47:05.056 22:35:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:47:05.056 22:35:23 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:05.056 22:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:47:05.056 22:35:23 -- pm/common@44 -- $ pid=3935722 00:47:05.056 22:35:23 -- pm/common@50 -- $ kill -TERM 3935722 00:47:05.056 22:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:05.056 22:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:47:05.056 22:35:23 -- pm/common@44 -- $ pid=3935723 00:47:05.056 22:35:23 -- pm/common@50 -- $ kill -TERM 3935723 00:47:05.317 22:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:05.317 22:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:47:05.317 22:35:23 -- pm/common@44 -- $ pid=3935725 00:47:05.317 22:35:23 -- pm/common@50 -- $ kill -TERM 3935725 00:47:05.317 22:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:05.317 22:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:47:05.317 22:35:23 -- pm/common@44 -- $ pid=3935750 00:47:05.317 22:35:23 -- pm/common@50 -- $ sudo -E kill -TERM 3935750 00:47:05.317 + [[ -n 3145685 ]] 00:47:05.317 + sudo kill 3145685 00:47:05.328 [Pipeline] } 00:47:05.340 [Pipeline] // stage 00:47:05.345 [Pipeline] } 00:47:05.357 [Pipeline] // timeout 00:47:05.362 [Pipeline] } 00:47:05.373 [Pipeline] // catchError 00:47:05.378 [Pipeline] } 00:47:05.390 [Pipeline] // wrap 00:47:05.396 [Pipeline] } 00:47:05.407 [Pipeline] // catchError 00:47:05.415 [Pipeline] stage 00:47:05.417 [Pipeline] { (Epilogue) 00:47:05.428 [Pipeline] catchError 00:47:05.430 [Pipeline] { 00:47:05.440 [Pipeline] echo 00:47:05.442 Cleanup processes 00:47:05.446 [Pipeline] sh 00:47:05.732 + sudo pgrep -af 
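The shutdown loop above relies on each resource monitor having written a `<name>.pid` file into the power output directory at startup; on exit, the trap walks the monitor list, reads each pid back, and sends SIGTERM, skipping monitors that never started. A sketch of that convention with illustrative names and a stand-in pid (the kill is only echoed here so the sketch stays side-effect free):

```shell
# Sketch of the pid-file monitor shutdown in the log. Directory and
# monitor names are illustrative; 424242 is a stand-in pid.
power_dir=$(mktemp -d)
monitors=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)

# Simulate one monitor having recorded its pid at startup.
echo 424242 > "$power_dir/collect-cpu-load.pid"

for monitor in "${monitors[@]}"; do
    pidfile="$power_dir/$monitor.pid"
    [[ -e $pidfile ]] || continue        # monitor never started; skip it
    pid=$(<"$pidfile")
    echo "would run: kill -TERM $pid     # from $pidfile"
done

rm -r "$power_dir"
```

Guarding on the pid file's existence is what lets the same trap run safely whether or not every monitor actually launched.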
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:05.732 3935867 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:47:05.732 3936418 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:05.747 [Pipeline] sh 00:47:06.036 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:06.036 ++ grep -v 'sudo pgrep' 00:47:06.036 ++ awk '{print $1}' 00:47:06.036 + sudo kill -9 3935867 00:47:06.048 [Pipeline] sh 00:47:06.338 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:47:18.597 [Pipeline] sh 00:47:19.002 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:47:19.002 Artifacts sizes are good 00:47:19.018 [Pipeline] archiveArtifacts 00:47:19.025 Archiving artifacts 00:47:19.201 [Pipeline] sh 00:47:19.485 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:47:19.501 [Pipeline] cleanWs 00:47:19.512 [WS-CLEANUP] Deleting project workspace... 00:47:19.512 [WS-CLEANUP] Deferred wipeout is used... 00:47:19.519 [WS-CLEANUP] done 00:47:19.521 [Pipeline] } 00:47:19.576 [Pipeline] // catchError 00:47:19.608 [Pipeline] sh 00:47:19.894 + logger -p user.info -t JENKINS-CI 00:47:19.904 [Pipeline] } 00:47:19.917 [Pipeline] // stage 00:47:19.922 [Pipeline] } 00:47:19.936 [Pipeline] // node 00:47:19.942 [Pipeline] End of Pipeline 00:47:20.003 Finished: SUCCESS
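The epilogue's process cleanup uses a classic self-excluding filter: list processes whose full command line matches the workspace path, drop the `pgrep` invocation itself from the listing (it matches its own pattern), and keep only the PID column for the kill. A sketch of that filter chain, with a canned listing standing in for real `sudo pgrep -af <workspace>` output:

```shell
# Sketch of the leftover-process cleanup at the end of the pipeline.
# The sample listing stands in for `sudo pgrep -af <workspace>` output;
# paths are abbreviated and illustrative.
listing='3935867 /usr/bin/ipmitool sdr dump /path/power/sdr.cache
3936418 sudo pgrep -af /path/spdk'

# Same chain as the log: exclude our own pgrep, take field 1 (the PID).
pids=$(printf '%s\n' "$listing" | grep -v 'sudo pgrep' | awk '{print $1}')
echo "$pids"   # the survivors would then get: sudo kill -9 $pids
```

Because `pgrep -f` matches against the full command line, the `pgrep` process itself always appears in its own results; the `grep -v 'sudo pgrep'` step exists purely to remove it before the kill.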